# Weekly Activity: Automating Workflows with Claude Code and AI Agents
text: human, code: AI
This week was all about automation, AI agents, and building systems that write themselves. I focused on three main areas: integrating Claude Code into GitHub workflows, evolving the Omega self-coding Discord bot, and building Blocks—a domain-driven validation framework for AI-powered development.
## Automating Blog Posts with Claude Code
The most exciting development this week was setting up an automated weekly activity blog post system. Instead of manually reviewing git commits and crafting blog posts, I built a GitHub Actions workflow that does it all automatically.
### How It Works
Every week, a GitHub Action creates an issue containing a comprehensive summary of all my GitHub activity:
```yaml
name: Create Weekly Activity Issue

on:
  schedule:
    - cron: '0 8 * * 3' # Every Wednesday at 8am UTC
  workflow_dispatch:

jobs:
  create-issue:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup pnpm
        uses: pnpm/action-setup@v3
        with:
          version: 9.15.0
      - name: Install dependencies
        run: pnpm install --frozen-lockfile
      - name: Create activity issue
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: node apps/homepage/scripts/create-activity-issue.js
```
The `create-activity-issue.js` script fetches all my GitHub activity using the Octokit API and formats it into a structured issue:
```javascript
async function fetchGitHubActivity(username, sinceDate) {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
  const activities = [];
  const repoDetails = new Map();

  // Use the authenticated user endpoint - gets ALL activity, including from orgs
  const response = await octokit.activity.listEventsForAuthenticatedUser({
    username,
    per_page: 100,
  });

  // Process events and group by repository
  for (const event of response.data) {
    // Extract commits, PRs, issues, etc.
  }

  return { activities, repoDetails };
}
```
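The event-processing loop is left as a stub above. A hedged sketch of what the grouping step might look like (the event shape follows GitHub's events API; the function name and counters are illustrative, not the actual script):

```javascript
// Illustrative sketch of the "process events and group by repository" step.
// Events carry a type, a repo.name, and a payload, per GitHub's events API.
function groupEventsByRepo(events) {
  const byRepo = new Map();
  for (const event of events) {
    const repo = event.repo?.name ?? 'unknown';
    if (!byRepo.has(repo)) {
      byRepo.set(repo, { commits: 0, pullRequests: 0, issues: 0 });
    }
    const bucket = byRepo.get(repo);
    if (event.type === 'PushEvent') {
      // A push event carries the list of commits it contained
      bucket.commits += event.payload?.commits?.length ?? 0;
    } else if (event.type === 'PullRequestEvent') {
      bucket.pullRequests += 1;
    } else if (event.type === 'IssuesEvent') {
      bucket.issues += 1;
    }
  }
  return byRepo;
}
```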
### The Claude Code Integration
Here’s where it gets interesting. The issue body includes an @claude mention with detailed instructions for creating a technical blog post. When the issue is created, a GitHub Actions workflow triggers Claude Code:
```yaml
name: Claude Code

on:
  issues:
    types: [opened, assigned]
  issue_comment:
    types: [created]

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'issues' && github.event.action == 'opened')
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
      issues: write
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```
Claude Code reads the activity summary, analyzes the code changes, and writes a comprehensive technical blog post—complete with code examples, explanations, and links. It then creates a pull request with the new blog post.
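The issue body itself is just a Markdown template that embeds the activity data alongside the @claude instructions. A hedged sketch of the idea (the section heading and instruction wording are illustrative; the real template lives in create-activity-issue.js):

```javascript
// Illustrative sketch of assembling the issue body that triggers Claude Code.
// The heading and instruction text here are hypothetical placeholders.
function buildIssueBody(activitySummary) {
  return [
    '## Weekly GitHub Activity',
    '',
    activitySummary,
    '',
    '@claude Please write a technical blog post covering the activity above,',
    'with code examples and links, and open a pull request with the new post.',
  ].join('\n');
}
```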
### Auto-Merge with Validation
To close the loop, another workflow automatically merges the PR after validation:
```yaml
name: Auto-Merge Activity Post PR

on:
  pull_request:
    types: [opened, synchronize, reopened]
    branches:
      - master

jobs:
  auto-merge:
    if: |
      contains(github.event.pull_request.labels.*.name, 'activity-post') ||
      startsWith(github.event.pull_request.title, 'Weekly Activity')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build homepage
        run: |
          cd apps/homepage
          npx json-blog build blog.json
      - name: Verify build
        run: |
          if [ ! -d "apps/homepage/build" ]; then
            echo "Build directory not found"
            exit 1
          fi
      - name: Auto-merge PR
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: gh pr merge ${{ github.event.pull_request.number }} --squash --auto
```
The result? A completely automated blog publishing pipeline that:
- Fetches my weekly GitHub activity
- Creates an issue with structured data
- Triggers Claude Code to write a technical blog post
- Creates a PR with the new post
- Validates the build
- Auto-merges and deploys
This is the post you’re reading right now, generated by this exact workflow.
## Omega: The Self-Coding Discord Bot
Omega is a Discord bot that writes its own code. It started with zero functionality and, through conversations with users, has built 18+ tools, growing as new needs come up.
### The Self-Coding Loop
The bot has full access to edit its own codebase. When you ask for a feature, it:
- Writes the code for that feature
- Commits the changes to git
- Deploys the update via GitHub Actions
- Uses the new feature
This week, I made several enhancements to Omega’s development workflow and capabilities.
### Enhanced Code Query Tool
One major addition was an AI-powered code query tool that lets Omega introspect its own codebase. Users can ask questions like “How does the artifact tool work?” and Omega will search its own code, analyze it with AI, and explain its implementation:
```typescript
async function analyzeCodeWithAI(
  files: Array<{ path: string; content: string }>,
  query: string,
  model: LanguageModelV1
): Promise<string> {
  const systemPrompt = `You are a code analysis assistant...`;

  const userPrompt = `
Query: ${query}

Files:
${files.map(f => `
File: ${f.path}
\`\`\`
${f.content}
\`\`\`
`).join('\n')}

Please provide a comprehensive analysis...
`;

  const { text } = await generateText({
    model,
    system: systemPrompt,
    prompt: userPrompt,
  });

  return text;
}
```
The tool can:
- Search the codebase using pattern matching
- Read full file contents for context
- Analyze code with GPT-4o
- Summarize implementations
- Explain architecture decisions
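The search step in that list can be sketched as a simple pattern scan over file contents. This is an illustrative simplification (in-memory files, regex matching), not Omega's actual implementation:

```javascript
// Minimal sketch of the "search the codebase" step: scan files for a
// pattern and return matching lines with their locations. The function
// name and return shape are illustrative.
function searchFiles(files, pattern) {
  const regex = new RegExp(pattern, 'i');
  const matches = [];
  for (const { path, content } of files) {
    content.split('\n').forEach((line, i) => {
      if (regex.test(line)) {
        matches.push({ path, line: i + 1, text: line.trim() });
      }
    });
  }
  return matches;
}
```

The matches would then be fed to the AI analysis step shown above, which reads the full files for context.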
### Improved Claude Code Workflow
I also set up automated PR creation for Omega. When Claude Code makes changes on a `claude/**` branch, a GitHub Action automatically:
- Merges the latest main branch to resolve conflicts
- Fixes pnpm lockfile issues if needed
- Creates a PR with the issue title
- Adds proper labels
- Deploys to Railway for testing
```yaml
- name: Auto-resolve merge conflicts
  run: |
    git fetch origin main
    git merge origin/main --no-commit --no-ff || true
    # Prefer Claude's changes (ours, since we're on the Claude branch)
    git checkout --ours .
    git add .
    git commit -m "Merge main and resolve conflicts" || true
- name: Fix pnpm lockfile
  run: |
    pnpm install --no-frozen-lockfile
    git add pnpm-lock.yaml
    git commit -m "Update pnpm lockfile" || true
- name: Create PR
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    gh pr create \
      --title "$ISSUE_TITLE" \
      --body "Fixes #$ISSUE_NUMBER..." \
      --label "claude-code"
```
### Multi-Channel Support
Previously, Omega only responded in a dedicated #omega channel. This week I updated it to respond in any channel when tagged, and to read past the default 20-message limit for better context:
```typescript
export async function shouldRespond(message: Message): Promise<ShouldRespondResult> {
  // Always respond to DMs
  if (message.channel.type === ChannelType.DM) {
    return { shouldRespond: true, confidence: 100, reason: 'Direct message' };
  }

  // Respond in any channel when mentioned
  const botMentioned = message.mentions.has(message.client.user!);
  const botTagged = message.content.includes('@Omega');
  if (botMentioned || botTagged) {
    return { shouldRespond: true, confidence: 100, reason: 'Bot mentioned' };
  }

  return { shouldRespond: false, confidence: 0, reason: 'Not mentioned' };
}
```
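Reading past the default message limit is, generically, cursor-based pagination: keep fetching older pages until you have enough history. A sketch where `fetchPage` is a stand-in for discord.js's `channel.messages.fetch({ limit, before })`, so the block stays runnable without Discord:

```javascript
// Generic pagination sketch. fetchPage(limit, beforeId) is a stand-in for
// a Discord history fetch; it must return messages newest-first, each with
// an id usable as a "before" cursor.
async function fetchHistory(fetchPage, total, pageSize = 100) {
  const messages = [];
  let before; // undefined = start from the newest message
  while (messages.length < total) {
    const page = await fetchPage(Math.min(pageSize, total - messages.length), before);
    if (page.length === 0) break; // channel exhausted
    messages.push(...page);
    before = page[page.length - 1].id; // oldest message in this page
  }
  return messages;
}
```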
## Blocks: Domain-Driven Validation for AI Development
Blocks is a new framework I’m building to control AI code generation through explicit domain semantics and multi-layer validation. It’s designed to solve a critical problem: how do you maintain consistency when AI tools generate code quickly but without a design system?
### The Problem
Modern AI coding tools (Claude Code, Cursor, GPT engineers) generate code fast, but without explicit domain modeling, output becomes inconsistent and unmaintainable. You might ask for a “user engagement score” and get wildly different implementations depending on the day, the context, or which AI you’re using.
### The Solution: Domain Modeling + Drift Detection
Blocks provides a semantic layer for defining your domain, then validates that generated code aligns with that domain:
```yaml
# blocks.yml - Your source of truth
project:
  name: "My Project"
  domain: "myproject.general"
  philosophy:
    - "Blocks must be small, composable, deterministic."
    - "Use functional programming patterns."
    - "Avoid side effects in business logic."

domain:
  entities:
    user:
      fields: [id, name, email, join_date]
  signals:
    engagement:
      description: "User engagement indicators"
      extraction_hint: "Derived from user actions like logins, posts, comments"
  measures:
    score_0_1:
      constraints:
        - "Must be between 0 and 1"
        - "Must handle null/undefined values"

blocks:
  user_engagement_score:
    description: "Calculate user engagement score"
    inputs:
      - name: user
        type: entity.user
    outputs:
      - name: score
        type: measure.score_0_1
    measures: [signal.engagement]
    domain_rules:
      - id: must_use_engagement
        description: "Score must use engagement signals"
```
### Multi-Layer Validation
Blocks validates code at multiple layers:
- Schema Validation - Ensures I/O signatures match the definition
- Shape Validation - Checks file structure and naming
- Domain Validation - AI-powered semantic analysis
The domain validator is the most interesting. It reads all files in your block directory and uses GPT-4o to validate semantic correctness:
```typescript
async validateDomainSemantics(params: {
  blockName: string;
  blockDefinition: string;
  files: Record<string, string>;
  domainRules?: string[];
  philosophy?: string[];
}): Promise<{
  isValid: boolean;
  issues: Array<{ message: string; severity: "error" | "warning"; file?: string }>;
}> {
  const schema = z.object({
    isValid: z.boolean(),
    issues: z.array(z.object({
      message: z.string(),
      severity: z.enum(["error", "warning"]),
      file: z.string().optional(),
    })),
  });

  const prompt = `You are validating that code implementation matches domain requirements.

Block Definition:
${params.blockDefinition}

Philosophy:
${params.philosophy?.join('\n') || 'None'}

Domain Rules:
${params.domainRules?.join('\n') || 'None'}

Implementation Files:
${Object.entries(params.files).map(([path, content]) => `
File: ${path}
\`\`\`
${content}
\`\`\`
`).join('\n')}

Validate that the implementation:
1. Follows the stated philosophy
2. Complies with all domain rules
3. Uses the specified entities, signals, and measures correctly
4. Matches the semantic intent of the block definition

Return validation results.`;

  const result = await generateObject({
    model: this.model,
    schema,
    prompt,
  });

  return result.object;
}
```
### Development-Time Focus
Crucially, Blocks is a development-time framework, not a runtime validator. The goal is to guide developers and AI agents to write correct code through semantic feedback loops, not to enforce constraints at runtime.
This week I published the first version to npm and deployed documentation:
- CLI: `@blocksai/cli`
- AI Validators: `@blocksai/ai`
- Docs: blocks.thomasdavis.dev
### Example: JSON Resume Themes
I created an example project showing how to use Blocks for validating JSON Resume themes:
```yaml
# Define domain for resume themes
domain:
  entities:
    resume:
      fields: [basics, work, education, skills, awards, publications]
  measures:
    typography_scale:
      constraints:
        - "Font sizes must follow modular scale (1rem, 1.25rem, 1.5rem, 2rem)"
    color_contrast:
      constraints:
        - "Must meet WCAG AA standards (4.5:1 for body text)"

blocks:
  modern_professional_theme:
    description: "Clean, professional resume theme"
    inputs:
      - name: resume
        type: entity.resume
    outputs:
      - name: html
        type: string
    measures: [typography_scale, color_contrast]
    domain_rules:
      - id: must_be_print_friendly
        description: "Theme must render properly when printed (max 2 pages)"
      - id: must_be_responsive
        description: "Must work on mobile, tablet, and desktop"
```
When you run `blocks run modern_professional_theme`, it validates:
- Schema: Does the theme accept a resume and output HTML?
- Shape: Is the file structure correct?
- Domain: Does it follow typography scales, meet contrast requirements, and work on all devices?
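Of those three layers, the schema check is the most mechanical. As an illustration (not the actual `@blocksai/cli` internals; the function and field names are assumptions), it boils down to a structural diff between the block definition and the I/O the implementation declares:

```javascript
// Illustrative sketch of schema-layer validation: compare the inputs and
// outputs declared in blocks.yml against those exposed by the implementation.
// The real tool's check is richer; this shows the core idea.
function validateSchema(definition, implementation) {
  const issues = [];
  for (const input of definition.inputs) {
    const impl = implementation.inputs.find(i => i.name === input.name);
    if (!impl) {
      issues.push(`missing input: ${input.name}`);
    } else if (impl.type !== input.type) {
      issues.push(`input ${input.name}: expected ${input.type}, got ${impl.type}`);
    }
  }
  for (const output of definition.outputs) {
    if (!implementation.outputs.find(o => o.name === output.name)) {
      issues.push(`missing output: ${output.name}`);
    }
  }
  return { isValid: issues.length === 0, issues };
}
```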
## Other Updates
### Dillinger Modernization
I updated Dillinger, the Markdown editor, to use modern dependencies:
```json
{
  "scripts": {
    "preinstall": "npm install --package-lock-only --ignore-scripts"
  },
  "resolutions": {
    "graceful-fs": "^4.2.11"
  }
}
```
Removed the `npm-force-resolutions` dependency and cleaned up the build process.
### Package Manager Updates
Upgraded pnpm across all projects from 7.15.0 to 9.15.0:
```json
{
  "packageManager": "pnpm@9.15.0"
}
```
And updated GitHub Actions workflows to match:
```yaml
- name: Setup pnpm
  uses: pnpm/action-setup@v3
  with:
    version: 9.15.0
```
## Reflections on AI-Assisted Development
This week really demonstrated the power of AI-assisted development workflows. The combination of:
- Claude Code for implementation
- GitHub Actions for automation
- Structured prompts and domain modeling (Blocks)
- Self-coding agents (Omega)
creates a development environment where:
- Ideas become issues
- Issues become code
- Code becomes PRs
- PRs become deployments
- Deployments become learning
The key insight is that AI works best with structure. The more you can formalize your domain (through tools like Blocks), your processes (through GitHub Actions), and your intent (through well-crafted prompts), the better the results.
This blog post itself is proof: written entirely by Claude Code based on structured GitHub activity data, following explicit format requirements, and validated through automated builds before merging.
## Links

### Projects

- Omega Discord Bot: github.com/thomasdavis/omega
- Blocks Framework: github.com/thomasdavis/blocks | docs
- This Blog: github.com/thomasdavis/lordajax.com
- Dillinger: github.com/thomasdavis/dillinger

### NPM Packages

- `@blocksai/cli` - Blocks command-line interface
- `@blocksai/ai` - AI-powered validators

### Tools & Frameworks

- Claude Code - AI pair programming
- GitHub Actions - CI/CD automation
- JSON Blog - Static site generator
- Vercel AI SDK - AI application framework
## Next Steps
Looking ahead, I plan to:
- Expand Blocks with more validator types (lint, chain, shadow, scoring)
- Add more tools to Omega based on user requests
- Improve the Claude Code auto-merge workflow with better conflict resolution
- Create more example projects demonstrating Blocks patterns
- Write detailed documentation on building self-coding agents
The future of development is collaborative: humans defining intent and domain models, AI generating implementations, and validation frameworks ensuring consistency. We’re just getting started.