
Prompt Engineering for Claude Code - The .NET Developer's Guide

Write effective Claude Code prompts for .NET 10 projects. The 4-layer instruction hierarchy, 10 Bad vs Better patterns, and a decision matrix from 6 months of daily use.


Six months ago, I gave Claude Code this prompt for a .NET project:

“Add authentication to my API.”

It generated a custom JWT implementation from scratch: 200+ lines of middleware, token generation, validation, and a custom AuthenticationHandler. None of it matched the existing Identity setup in my Program.cs. I spent two hours untangling the mess before deleting everything and starting over.

Last week, I gave it this prompt instead:

“Add JWT bearer authentication to the API. Use Microsoft.AspNetCore.Authentication.JwtBearer. Read Program.cs for the existing DI setup. Read appsettings.json for the JWT configuration section. Register the authentication middleware after UseRouting but before UseAuthorization. Add an [Authorize] attribute to all controllers in the Controllers/ folder except AuthController.”

It completed the task in one shot: correct middleware ordering, correct package, correct configuration binding, no rework needed.

The difference? Not a smarter model. Better prompts.

Prompt engineering for Claude Code is the skill that determines whether AI coding saves you hours or costs you hours. After six months of using Claude Code daily for .NET development, I have refined a system that consistently produces first-try results. This guide covers everything: the 4-layer instruction hierarchy, 10 real Bad vs Better prompt patterns for .NET, and the decision matrix for choosing the right interaction mode.

Let’s get into it.

What Is Prompt Engineering for Claude Code?

Prompt engineering for Claude Code is the practice of writing precise, context-rich instructions that leverage Claude Code’s ability to read your entire codebase, producing correct, idiomatic code on the first attempt.

This is fundamentally different from prompting ChatGPT or Copilot. Claude Code is not a chatbot that guesses based on your description. It has direct access to your files, your project structure, your Program.cs, your .csproj dependencies, and your existing patterns. The quality of your output depends entirely on how well you direct that access.

Three things make Claude Code prompting unique:

  1. Codebase awareness - Claude Code reads your files proactively. It explores your project structure, picks up existing patterns, and mimics your conventions without being told. This shifts the frontier of prompt engineering: you no longer need to describe your architecture - you need to define scope and constraints.
  2. Persistent instructions - You can set rules once in CLAUDE.md that apply to every prompt, so you do not repeat yourself. CLAUDE.md is now the single highest-leverage file in any .NET project I work on.
  3. Multi-mode interaction - Plan mode for thinking, direct prompts for doing, skills for automation, subagents for parallelism. Each mode has a sweet spot.

The official Claude Code documentation covers the tool’s capabilities, but it does not show you how to prompt effectively for .NET projects. This guide fills that gap.

The most common failure mode in 2026 is not Claude Code generating code that does not match your patterns - it is Claude Code being too eager: rewriting 15 files when you asked for changes to one, adding abstractions you did not request, or solving problems you did not have. The skill is no longer “read first, do not describe” - it is specifying scope, non-goals, and verification bars so Claude Code does exactly what you asked and stops there.

The 4-Layer Instruction Hierarchy

This is the framework that changed everything for me. Instead of putting all instructions into session prompts, I split them across four layers, each with a specific purpose and lifetime.

Layer 1: CLAUDE.md (Project-Level Context)

What goes here: Project architecture, tech stack, coding conventions, file paths. Anything Claude Code needs to know for every task in this project.

# Project: InventoryAPI
## Tech Stack
- .NET 10, ASP.NET Core Web API, EF Core 10
- PostgreSQL via Npgsql
- Scalar for API docs (NOT Swagger)
- .slnx solution format
## Architecture
- Vertical slice architecture
- Features/ folder with one folder per feature
- Each feature: Handler, Request, Response, Validator, Endpoint
- No repository pattern, EF Core DbContext injected directly
## Conventions
- Primary constructors for DI
- File-scoped namespaces
- CancellationToken on every async method
- IResult return types for minimal API endpoints

When to promote to CLAUDE.md: If you have said the same thing in three separate prompts, it belongs here. I track this. When I catch myself typing “use primary constructors” for the third time, I add it to CLAUDE.md and never type it again. The Claude Code best practices guide recommends this same approach: invest in your CLAUDE.md early.

Layer 2: Rules (Targeted Behavior)

What goes here: Specific rules that apply to certain file types, patterns, or scenarios. Rules live in .claude/rules/ and are always loaded.

.claude/rules/ef-core.md
- Always use async EF Core methods (ToListAsync, FirstOrDefaultAsync)
- Never use .Result or .Wait() on async calls
- Add CancellationToken parameter to every query method
- Use .AsNoTracking() for read-only queries
- Configure entities with IEntityTypeConfiguration, not OnModelCreating

Rules are more focused than CLAUDE.md. Where CLAUDE.md says “this project uses EF Core 10”, a rule says “when writing EF Core queries, always do X, Y, Z.”
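For illustration, here is the shape of a query method that satisfies every rule in that file. The type names (AppDbContext, Product, ProductDto) are placeholders, not part of the rules file itself:

```csharp
// Hypothetical read-only query following .claude/rules/ef-core.md.
public async Task<List<ProductDto>> GetActiveProductsAsync(
    AppDbContext db, CancellationToken ct)
{
    return await db.Products
        .AsNoTracking()                            // read-only: skip change tracking
        .Where(p => p.IsActive)
        .Select(p => new ProductDto(p.Id, p.Name)) // project, don't load full entities
        .ToListAsync(ct);                          // async method + CancellationToken
}
```

When the rule file exists, Claude Code produces this shape without being asked in the session prompt.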

Layer 3: Skills (Reusable Workflows)

What goes here: Multi-step workflows you run repeatedly. Skills live in .claude/skills/ and are invoked with /skillname.

.claude/skills/endpoint/SKILL.md
Create a new minimal API endpoint:
1. Create Features/[Name]/[Name]Endpoint.cs
2. Create Features/[Name]/[Name]Request.cs with FluentValidation validator
3. Create Features/[Name]/[Name]Response.cs as a record
4. Create Features/[Name]/[Name]Handler.cs with the business logic
5. Register the endpoint in Program.cs using MapGroup
6. Follow the existing pattern in Features/Products/ as reference

Instead of typing out 6 instructions every time I need a new endpoint, I type /endpoint CreateOrder and the skill handles everything.

Layer 4: Session Prompts (One-Time Instructions)

What goes here: The specific task you need done right now. This is the prompt you type in the terminal.

The key insight: session prompts should be lean because layers 1-3 already carry the persistent context. You do not need to say “use primary constructors and file-scoped namespaces” in every prompt if CLAUDE.md already says it.

When to Use Each Layer

| Signal | Right Layer |
| --- | --- |
| Said it once, task-specific | Session prompt |
| Said it 3+ times for different tasks | CLAUDE.md |
| Applies to specific file types or patterns | Rule |
| Multi-step workflow you repeat weekly | Skill |
| Team-wide standard, all developers | CLAUDE.md (committed to git) |
| Personal preference, just you | Rule (in .claude/rules/) |

My take: Most developers put too much in session prompts and too little in CLAUDE.md. The single change that improved my first-try success rate the most was moving repeated instructions up to the right layer. If you are typing the same context in every prompt, you are doing it wrong.

10 Bad vs Better Prompt Patterns for .NET

These are real patterns from my daily .NET work. Each shows a bad prompt, a better prompt, and why the difference matters in 2026.

The theme across all 10: Claude Code already mimics your existing patterns automatically. The leverage is in scope, constraints, and verification bars - things Claude Code cannot infer from your codebase.

1. Scaffolding a New API Endpoint

Bad:

“Create an endpoint for creating orders.”

Better:

“Create a POST /api/orders endpoint following the existing vertical slice convention. The handler should validate the request, check product stock via AppDbContext, create the Order entity, and return the order ID. Only create files under Features/Orders/. Do not modify Program.cs except to register the new MapGroup. Do not add any new packages.”

Why it matters: Claude Code will discover your Features/ convention on its own - you do not need to point it at a specific file. The leverage is in defining behavior, scope (files under Features/Orders/), and non-goals (no new packages, do not touch Program.cs beyond what is needed). The bad prompt invites Claude Code to be helpful in places you did not ask.
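For reference, this is roughly the vertical-slice shape the better prompt asks for. All names here (CreateOrderRequest, AppDbContext, Order) are illustrative, not prescribed by the prompt:

```csharp
// Sketch of Features/Orders/CreateOrderEndpoint.cs (hypothetical names).
public record CreateOrderRequest(int ProductId, int Quantity);
public record CreateOrderResponse(int OrderId);

public static class CreateOrderEndpoint
{
    public static IEndpointRouteBuilder MapCreateOrder(this IEndpointRouteBuilder app)
    {
        app.MapPost("/api/orders", async (
            CreateOrderRequest request, AppDbContext db, CancellationToken ct) =>
        {
            // Check stock before creating the order, as the prompt specifies.
            var product = await db.Products.FindAsync([request.ProductId], ct);
            if (product is null || product.Stock < request.Quantity)
                return Results.BadRequest("Insufficient stock or unknown product.");

            var order = new Order { ProductId = request.ProductId, Quantity = request.Quantity };
            db.Orders.Add(order);
            await db.SaveChangesAsync(ct);
            return Results.Created($"/api/orders/{order.Id}", new CreateOrderResponse(order.Id));
        });
        return app;
    }
}
```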

2. Debugging a Failing EF Core Query

Bad:

“My EF Core query is slow, fix it.”

Better:

“The GET /api/products endpoint takes 3+ seconds for 1,000 records. Target P99 under 200ms. Investigate first - look for N+1 queries, missing indexes, unnecessary tracking. Propose changes with expected impact, then I will approve before you modify anything.”

Why it matters: “Slow” is not actionable. The better prompt gives the baseline (3+ seconds), the target (200ms), and a verification gate (propose first, approve before modifying). The “investigate first, propose, then change” pattern prevents Claude Code from rewriting the entire query layer when a missing index would have done the job.
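A typical proposal from that investigation looks like this before/after, assuming the slowness comes from tracking plus an N+1 on a navigation property (entity names are illustrative):

```csharp
// Before: tracked entities, then p.Category.Name accessed per item (N+1 queries).
var slow = await db.Products.ToListAsync(ct);

// After: one SQL statement, no change tracking, only the columns the endpoint needs.
var fast = await db.Products
    .AsNoTracking()
    .Select(p => new { p.Id, p.Name, CategoryName = p.Category.Name })
    .ToListAsync(ct);
```

The approval gate in the prompt means you see this diff, and its expected impact, before anything changes.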

3. Refactoring Legacy Code to Modern C#

Bad:

“Modernize this code.”

Better:

“Refactor Services/OrderService.cs from constructor injection with private readonly fields to primary constructors. Convert all nested null checks to null-conditional operators and pattern matching. Keep the existing method signatures unchanged. Only modernize the internals. Do not change any business logic.”

Why it matters: “Modernize” means different things to different developers. The better prompt specifies exactly which modernizations to apply and explicitly says what NOT to change. The constraint “keep method signatures unchanged” prevents breaking changes.
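The before/after the prompt describes looks roughly like this, with the service name taken from the prompt and everything else illustrative:

```csharp
// Before: classic constructor injection with a readonly field.
public class OrderService
{
    private readonly AppDbContext _db;
    public OrderService(AppDbContext db) => _db = db;
    // methods use _db; public signatures stay as they are
}

// After: C# 12+ primary constructor, same public surface.
public class OrderService(AppDbContext db)
{
    // method bodies reference `db` directly; signatures unchanged
}
```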

4. Writing Unit Tests

Bad:

“Write tests for the order service.”

Better:

“Write xUnit tests for OrderService.CreateOrderAsync. Cover: happy path, insufficient stock, invalid product ID, and concurrency conflict. Match the AAA pattern used in the existing test project. Do not modify the OrderService implementation. Do not add new test infrastructure - use what is already there.”

Why it matters: Claude Code will pick up your existing test conventions on its own. What it cannot infer is which scenarios to cover and what is off-limits (modifying the implementation under test, adding parallel test infrastructure). Specifying the test cases explicitly is the single highest-value constraint for test prompts.

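One of the scenario tests from the prompt would come out roughly like this. The helper, entity shape, and method signature are illustrative, not the author's actual code:

```csharp
// Sketch of one AAA-pattern xUnit test for the insufficient-stock scenario.
public class OrderServiceTests
{
    [Fact]
    public async Task CreateOrderAsync_InsufficientStock_Throws()
    {
        // Arrange: hypothetical test-database helper and seed data.
        var db = TestDbFactory.Create();
        db.Products.Add(new Product { Id = 1, Stock = 0 });
        await db.SaveChangesAsync();
        var sut = new OrderService(db);

        // Act: attempt to order more than is in stock.
        var act = () => sut.CreateOrderAsync(productId: 1, quantity: 5, CancellationToken.None);

        // Assert: the service rejects the order.
        await Assert.ThrowsAsync<InvalidOperationException>(act);
    }
}
```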

5. Adding a Feature to Existing Code

Bad:

“Add caching to the product service.”

Better:

“Add HybridCache to GetProductsHandler. TTL 5 minutes sliding. Cache key format: products-list-PAGE-PAGESIZE. Invalidate on writes in CreateProductHandler and UpdateProductHandler. Do not modify Program.cs beyond registering AddHybridCache if missing. Do not add any new packages other than Microsoft.Extensions.Caching.Hybrid. Do not refactor surrounding code.”

Why it matters: “Add caching” leaves open: what type of cache? What key pattern? What TTL? When to invalidate? And what NOT to touch. The better prompt answers all of those - including the negative space (no surrounding refactors, no new packages beyond the one needed). This is the prompt-engineering pattern with the most teeth in 2026: bound the blast radius.
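The handler change the prompt describes is roughly this shape. The handler and DTO names are illustrative; the key format and TTL come straight from the prompt (note that HybridCache expiration is absolute, so "sliding" is an approximation here):

```csharp
// Sketch of GetProductsHandler with HybridCache (Microsoft.Extensions.Caching.Hybrid).
public class GetProductsHandler(AppDbContext db, HybridCache cache)
{
    public async Task<List<ProductDto>> HandleAsync(int page, int pageSize, CancellationToken ct)
    {
        return await cache.GetOrCreateAsync(
            $"products-list-{page}-{pageSize}",          // key format from the prompt
            async token => await db.Products
                .AsNoTracking()
                .Skip((page - 1) * pageSize).Take(pageSize)
                .Select(p => new ProductDto(p.Id, p.Name))
                .ToListAsync(token),
            new HybridCacheEntryOptions { Expiration = TimeSpan.FromMinutes(5) },
            cancellationToken: ct);
    }
}
```

The write handlers then call `cache.RemoveAsync(...)` for the affected keys, which is the invalidation half of the prompt.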

6. Code Review and Finding Bugs

Bad:

“Review my code for bugs.”

Better:

“Review Features/Orders/ for these specific issues: (1) missing CancellationToken propagation in async methods, (2) EF Core queries that could cause N+1 problems, (3) missing input validation before database calls, (4) exception handling that swallows errors silently. For each issue found, show the file, line, and a fix.”

Why it matters: “Review for bugs” is too broad, and you will get vague suggestions. The better prompt creates a checklist of specific bug categories relevant to .NET APIs. Claude Code can methodically check each one.

7. .NET Version Migration

Bad:

“Upgrade my project to .NET 10.”

Better:

“Migrate this solution from .NET 9 to .NET 10. Update all <TargetFramework> to net10.0. Bump every Microsoft.* and EntityFrameworkCore.* package to its 10.0.0 release. Run dotnet build after each phase and fix only errors directly caused by the migration - do not refactor or ‘modernize’ anything that already builds. Surface any deprecation warnings in a list at the end for me to triage. Do not change application code beyond what the build requires.”

Why it matters: Migrations are where Claude Code’s “helpfulness” causes the most damage. Without explicit constraints, it will rewrite primary constructors, swap out logging, modernize patterns - all under the cover of “while I was in there.” The better prompt sets a strict verification bar (each phase must build) and explicitly forbids opportunistic refactors. The deprecation triage list keeps you in the loop without forcing an immediate decision on every warning.

8. Architecture Decisions

Bad:

“Should I use CQRS?”

Better:

“Read the current project structure in Features/. I am considering splitting read and write operations into separate handlers with MediatR. Current state: each feature folder has one handler doing both reads and writes. Evaluate whether CQRS would benefit this project given its size (~15 features, single database, no event sourcing). List specific tradeoffs for THIS project, not generic CQRS pros/cons.”

Why it matters: Architecture questions without project context produce textbook answers. The better prompt gives Claude Code the project’s actual structure and constraints, so the answer is specific to your codebase.

9. Performance Optimization

Bad:

“Make my API faster.”

Better:

“The GET /api/products endpoint returns 50 items per page but takes 800ms. Read Features/Products/GetProductsHandler.cs and Data/Configurations/ProductConfiguration.cs. Profile the generated SQL by enabling EF Core query logging. Check: (1) is AsNoTracking used for read queries, (2) are there missing database indexes on filtered/sorted columns, (3) is Select projection used instead of loading full entities, (4) are related entities loaded efficiently. Target: under 200ms for 50 items.”

Why it matters: “Make it faster” has no baseline and no target. The better prompt gives the current performance (800ms), the target (200ms), and the specific optimization checklist for EF Core queries.

10. Docker and Deployment Tasks

Bad:

“Dockerize my app.”

Better:

“Create a multi-stage Dockerfile for the API. Use mcr.microsoft.com/dotnet/sdk:10.0 for build and mcr.microsoft.com/dotnet/aspnet:10.0 for runtime. Run as non-root user. Expose port 8080. Add a .dockerignore excluding bin/, obj/, .git/, .vs/. Only create the Dockerfile and .dockerignore - do not modify any .csproj or Program.cs. Do not add health check endpoints unless I ask. Do not generate docker-compose.yml.”

Why it matters: “Dockerize” invites Claude Code to do everything - Dockerfile, compose file, health checks, environment variable refactors, even Kubernetes manifests if you have a k8s/ folder. The better prompt scopes the deliverable to exactly two files and explicitly forbids the helpful additions that turn a 5-minute task into a 50-minute review.
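The deliverable the prompt scopes is a single Dockerfile roughly like this. The project name is illustrative; the `app` user is the non-root user that ships in recent .NET runtime images:

```dockerfile
# Sketch of the multi-stage Dockerfile the prompt describes (InventoryAPI is a placeholder).
FROM mcr.microsoft.com/dotnet/sdk:10.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish InventoryAPI.csproj -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:10.0 AS runtime
WORKDIR /app
COPY --from=build /app/publish .
# Non-root user built into the aspnet base image.
USER app
ENV ASPNETCORE_HTTP_PORTS=8080
EXPOSE 8080
ENTRYPOINT ["dotnet", "InventoryAPI.dll"]
```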

Decision Matrix: Plan Mode vs Direct Prompt vs Skill vs Subagent

Not every task needs the same interaction mode. Here is the decision matrix I use daily:

| Criteria | Direct Prompt | Plan Mode | Skill | Subagent |
| --- | --- | --- | --- | --- |
| Task size | Small, single file | Large, multi-file | Repeatable workflow | Independent, parallelizable |
| Confidence | High, you know exactly what you want | Low, need to explore first | High, you have done this before | High, tasks do not depend on each other |
| Files affected | 1-3 files | 5+ files | Defined by the skill template | Multiple independent areas |
| Risk | Low, easy to revert | High, hard to undo | Low, pattern is proven | Medium, each agent works in isolation |
| Time | Seconds | Minutes (thinking + execution) | Seconds (after skill is built) | Minutes (parallel execution) |
| Example | “Add a property to ProductResponse” | “Design the caching layer for this API” | /endpoint CreateOrder | “Write tests for all 5 handlers in parallel” |

When to Use Each Mode

Direct prompt: You know the exact change. One file, one task, high confidence. Most of your daily prompts should be direct.

Plan mode: You are unsure about the approach, the task spans multiple files, or the change is hard to revert. Plan mode thinks first, then executes. No wasted code. I use plan mode for anything that touches 5+ files or involves an architectural decision.

Skill: You have done this task 3+ times and the steps are always the same. Building a skill takes 10 minutes but saves hours over weeks. I have skills for creating endpoints, writing tests, running database migrations, and generating social media posts.

Subagent: You have multiple independent tasks that do not depend on each other. Instead of running them sequentially, subagents run in parallel. I use them for writing tests across multiple feature folders simultaneously.

My take: Default to direct prompts. Reach for plan mode when you are uncertain. Build a skill when you catch yourself repeating a workflow. Use subagents when you need parallelism. Most developers over-use plan mode for simple tasks and under-use skills for repetitive ones.

Advanced Techniques

Extended Thinking

For complex tasks, Claude Code can “think” before responding. According to Anthropic’s prompt engineering guide, giving Claude complex, multi-constraint prompts activates deeper reasoning. I trigger this for architecture decisions and multi-step refactors. When Claude Code needs to reason through tradeoffs, it activates its extended thinking automatically.

File References with @

Use @ to point Claude Code at specific files:

“Read @Features/Products/CreateProductHandler.cs and add the same validation pattern to @Features/Orders/CreateOrderHandler.cs”

This is faster than typing full paths and ensures Claude Code reads the exact files you mean.

Piping Data into Claude Code

You can pipe data directly into Claude Code from the terminal:

dotnet test --logger "console;verbosity=detailed" 2>&1 | claude "Read these test results. Fix all failing tests. The test project is at tests/InventoryAPI.Tests/"

dotnet build 2>&1 | claude "Fix all build errors. Start with the first error, later errors are often caused by earlier ones."

This is incredibly powerful for debugging. Claude Code sees the exact error output, not your description of it.

Screenshot and Image Input

Claude Code can read screenshots. When a UI does not match what you expect:

“Read this screenshot @screenshot.png. The product list page shows prices without currency formatting. Read Pages/Products/Index.razor and fix the price display to use .ToString("C") formatting.”

Compact Mode

For large projects, Claude Code’s context window can fill up during long sessions. Use /compact to summarize the conversation and free up context. I run /compact after every major task completion. It keeps Claude Code’s responses fast and focused.

My Take: What I Learned After 6 Months of Claude Code for .NET

Here is what actually moves the needle, based on daily use across multiple .NET projects:

1. Specificity beats length. A 3-line prompt with exact requirements and clear scope outperforms a 20-line prompt with vague descriptions. Claude Code does not need your life story. It needs the bar and the boundary.

2. Constraints are the new prompt engineering. Six months ago, the leverage was in pointing Claude Code at the right file. Today, Claude Code finds the right file on its own - the leverage has moved to scope and non-goals. “Do not modify Program.cs”, “Do not add new packages”, “Do not refactor surrounding code”. Every great prompt in 2026 has at least one explicit non-goal.

3. Bound the blast radius. The most expensive mistake is letting Claude Code be “helpful” beyond what you asked. An AI that changes 15 files when you asked for 3 is not helpful - it is a 45-minute code review you did not budget. Always specify which files are in scope and which are off-limits.

4. The 4-layer hierarchy pays compound interest. It took me one afternoon to set up CLAUDE.md, rules, and three skills for my main project. That investment has saved me hundreds of repeated keystrokes across six months. The return compounds with every prompt.

5. Plan mode is not optional for big tasks. I learned this the hard way after Claude Code generated a caching layer that conflicted with the existing Response Compression middleware. Now, anything that touches 5+ files goes through plan mode first. The thinking time saves rework time.

6. Prompt like a tech lead giving a ticket. The best mental model for Claude Code prompts is a well-written Jira ticket: clear acceptance criteria, links to relevant code, constraints on scope, and expected output. If your prompt would be a bad ticket, it will be a bad prompt.

Skip the Setup: The .NET Claude Kit

Everything in this guide - the CLAUDE.md, the rules, the skills, the specialist agents - took me months to build and refine. I open-sourced the entire system as the .NET Claude Kit so you do not have to start from scratch.

Here is what the kit gives you out of the box:

| Component | Count | What It Does |
| --- | --- | --- |
| Skills | 47 | Pre-built knowledge for EF Core, minimal APIs, Docker, testing, authentication, caching, and more |
| Specialist Agents | 10 | api-designer, ef-core-specialist, test-engineer, security-auditor, devops-engineer, and 5 more. Claude routes to the right agent automatically. |
| Slash Commands | 16 | /scaffold, /verify, /tdd, /code-review, /build-fix, /health-check, /security-scan, and more |
| Rules | 10 | Always-loaded conventions for modern C# 14, .NET 10 patterns, async best practices |
| Roslyn MCP Tools | 15 | Semantic code navigation that saves ~10x tokens. find_symbol locates a type in 30-50 tokens instead of grepping files at 500+ |
| Hooks | 7 | Automated workflows that trigger on specific events |
| Project Templates | 5 | Drop-in CLAUDE.md templates for Web API, Modular Monolith, Blazor, Worker Service, and Class Library |

The kit implements the 4-layer hierarchy from this guide. The CLAUDE.md templates handle Layer 1 (project context). The 10 rules handle Layer 2 (targeted behavior). The 47 skills and 16 slash commands handle Layer 3 (reusable workflows). Your session prompts stay lean because the kit carries the context.

Quick Start

Install the Roslyn MCP server and load the kit:

dotnet tool install -g CWM.RoslynNavigator
claude --plugin-dir ./dotnet-claude-kit

Or run /dotnet-init inside Claude Code to auto-detect your project and generate a customized CLAUDE.md.

The kit also includes an architecture advisor that asks 15 questions about your project (domain complexity, team size, project lifetime) and recommends one of four architectures: Vertical Slice, Clean Architecture, DDD, or Modular Monolith. It is the decision matrix from this article, automated.

The kit is free, MIT licensed, and available on GitHub. Check the full landing page for a deep dive into every component.

Troubleshooting: Common Prompt Failures and Fixes

Claude Code Ignores My CLAUDE.md

Cause: The CLAUDE.md file is not in the project root, or Claude Code was started from a different directory.

Fix: Always start Claude Code from the directory containing CLAUDE.md. Verify with ls CLAUDE.md in the Claude Code terminal. If using a monorepo, place CLAUDE.md at the repo root, not inside a subfolder.

Claude Code Generates Code That Does Not Match My Patterns

Cause: In 2026, Claude Code usually picks up your conventions automatically. If it is missing them, your CLAUDE.md is too thin or your conventions are inconsistent across the codebase.

Fix: Document the convention explicitly in CLAUDE.md (“vertical slice architecture, one folder per feature, no repository pattern”). If your codebase has inconsistent patterns, Claude Code will pick whichever it sees first - in that case, name the canonical file in the prompt: “Match the convention used in Features/Products/” so Claude Code anchors on the right example.

Claude Code Modifies Files I Did Not Ask It to Touch

Cause: The prompt scope is too broad (“refactor the project”) without constraints.

Fix: Add explicit scope constraints: “Only modify files in Features/Orders/. Do not touch Program.cs or any shared services.” Use git worktrees for risky operations so unwanted changes are isolated.

Claude Code Uses Outdated .NET Patterns

Cause: Without version context, Claude Code may default to older patterns (explicit constructors, Startup.cs, Swagger).

Fix: Specify the .NET version in CLAUDE.md and in session prompts: “This is a .NET 10 project. Use minimal APIs, primary constructors, and Scalar for API documentation.”

Claude Code’s Context Window Fills Up During Long Sessions

Cause: Long conversations with many file reads and large code blocks consume the context window.

Fix: Run /compact after completing each major task. Start new Claude Code sessions for unrelated tasks. Keep prompts focused on one task at a time.

Claude Code Generates Correct but Overly Complex Code

Cause: The prompt does not specify simplicity constraints.

Fix: Add “keep it simple” constraints: “Use the simplest approach that works. No abstractions unless there is a clear second use case. No helper utilities for one-time operations.”

Key Takeaways

  • Use the 4-layer hierarchy: CLAUDE.md for project context, rules for patterns, skills for workflows, session prompts for one-time tasks. CLAUDE.md is the single highest-leverage file in any Claude Code project.
  • Constraints over descriptions: Claude Code already mimics your existing patterns. The leverage is in scope and non-goals - “Only modify Features/Orders/”, “Do not add new packages”, “Do not refactor surrounding code”.
  • Always specify a verification bar: Numbers (P99 under 200ms), boolean checks (build passes after each phase), or approval gates (propose first, then I approve) - all prevent Claude Code from over-delivering.
  • Bound the blast radius: Every prompt should answer two questions - what files are in scope, and what is explicitly off-limits.
  • Default to direct prompts: Plan mode for uncertain tasks, skills for repeatable ones, subagents for parallel work.
  • Prompt like a tech lead: Clear acceptance criteria, scope constraints, non-goals, and a verification bar.
Frequently Asked Questions

What is prompt engineering for Claude Code?

Prompt engineering for Claude Code is the practice of writing precise instructions that define scope, constraints, and verification bars - so Claude Code does exactly what you asked and stops there. Claude Code already discovers and mimics your existing patterns automatically, so the leverage in 2026 is no longer in describing your architecture but in bounding what Claude Code is allowed to change.

How do I write effective prompts for Claude Code in .NET?

Write effective prompts by following three principles: specify exact requirements (TTL of 5 minutes sliding, P99 under 200ms), bound the scope (only modify Features/Orders/), and include explicit non-goals (do not modify Program.cs, do not add new packages, do not refactor surrounding code). Set up CLAUDE.md with your .NET version, architecture, and conventions so you do not repeat this context in every prompt.

What is the difference between CLAUDE.md and session prompts?

CLAUDE.md contains persistent project context that applies to every task: tech stack, architecture decisions, coding conventions, and file paths. Session prompts contain one-time instructions for the specific task at hand. CLAUDE.md is loaded automatically on every Claude Code session, so instructions there never need repeating. If you type the same thing in three session prompts, move it to CLAUDE.md.

When should I use plan mode vs direct prompts in Claude Code?

Use direct prompts for small, high-confidence tasks affecting 1-3 files. Use plan mode when you are unsure about the approach, the task spans 5 or more files, or the change is hard to revert. Plan mode thinks through the approach before writing code, preventing wasted effort on complex tasks like designing a caching layer or refactoring an architecture.

How do I get Claude Code to follow my coding standards?

Add your coding standards to CLAUDE.md at the project root and to rule files in .claude/rules/. For example, create .claude/rules/ef-core.md with rules like always use async methods, always pass CancellationToken, and use AsNoTracking for read queries. Claude Code loads these automatically on every session and applies them to all generated code.

What are the best Claude Code prompts for C# developers?

The best prompts in 2026 follow the constraint-first pattern: state the exact requirement, bound the scope, and add explicit non-goals. For example: Add HybridCache to GetProductsHandler with TTL 5 minutes sliding. Invalidate on writes in CreateProductHandler and UpdateProductHandler. Do not modify Program.cs beyond registering AddHybridCache if missing. Do not add packages other than Microsoft.Extensions.Caching.Hybrid. Do not refactor surrounding code. This pattern prevents Claude Code from being over-eager - the most common failure mode now.

How do I prevent Claude Code from making common .NET mistakes?

Add preventive rules to CLAUDE.md or .claude/rules/. Common rules for .NET: always use async EF Core methods, never use .Result or .Wait on async calls, always pass CancellationToken, use AsNoTracking for read queries, use primary constructors, and target the specific .NET version (e.g., .NET 10). These rules catch mistakes before they happen.

Can Claude Code handle large .NET solutions with multiple projects?

Yes, Claude Code handles large solutions well when you provide structure in CLAUDE.md. Document the solution layout (which project does what), key file paths, and inter-project dependencies. For very large solutions, use scope constraints in prompts to focus Claude Code on specific projects. Use compact mode between tasks to manage context window usage, and consider subagents for parallel work across independent projects.

That covers everything I know about writing effective prompts for Claude Code in .NET projects. The 4-layer hierarchy, the 10 Bad vs Better patterns, and the decision matrix: these are the tools that took me from frustrated prompt-and-pray to consistent first-try results.

Start with CLAUDE.md. Add your tech stack, your conventions, and your architecture. Then apply the constraint-first pattern to your next prompt - bound the scope, name the non-goals, set the verification bar. You will feel the difference immediately. Or skip the setup and grab the .NET Claude Kit to get started in minutes.

Happy Coding :)
