
AI Prompting Tips for Better Code: 10 Techniques That Work

Practical prompt-engineering techniques to produce cleaner, tested, and faster code with AI assistants in real projects.

Welcome to Episode 3 of our Vibe Coding 101 series. As we covered in our previous guide, "What Is Vibe Coding? The Complete Beginner's Guide," and in Episode 2 where you built your first AI-assisted portfolio site, this installment focuses on practical prompt-engineering techniques specifically for coding with AI. These are hands-on methods you can use right now with tools like ChatGPT, GitHub Copilot, Anthropic Claude, or Google Gemini (check vendor sites for the latest plan and version details).

Below are 10 techniques that consistently improve the code AI returns. For each, you’ll find a short explanation, a before prompt + resulting code sample, and an after prompt + improved code sample. Use these as templates, tweak for your stack, and iterate.

A laptop screen showing a code editor with a cute orange crab plush toy beside it.

Photo by Daniil Komov on Pexels

1) Specify the exact output format

Why: AI models are more predictable when you constrain output format (JSON, function only, diffs).

Before

  • Prompt: "Write a Python function to reverse a string."
  • Resulting code:

def reverse(s): return s[::-1]

After

  • Prompt: "Return only a single Python function (no explanation). Add type hints, a docstring, and a unittest."
  • Resulting code:

def reverse(s: str) -> str:
    """Return the reversed string."""
    return s[::-1]

import unittest

class TestReverse(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(reverse('abc'), 'cba')

if __name__ == '__main__':
    unittest.main()

2) Give context and constraints

Why: Tell the model the project type, Python version, frameworks, or performance constraints.

Before

  • Prompt: "Make an API endpoint that returns users."
  • Resulting code (generic):

@app.route('/users')
def users():
    return jsonify(User.query.all())

After

  • Prompt: "You are a Flask backend engineer targeting Python 3.11. Return a paginated /users JSON endpoint, limit page size to 50, and include total count. Use SQLAlchemy and marshmallow. Only provide the endpoint function."
  • Resulting code:

@bp.route('/users')
def get_users():
    page = int(request.args.get('page', 1))
    per_page = min(int(request.args.get('per_page', 20)), 50)
    q = User.query
    total = q.count()
    users = q.paginate(page=page, per_page=per_page, error_out=False).items
    return jsonify({
        'total': total,
        'page': page,
        'per_page': per_page,
        'users': UserSchema(many=True).dump(users)
    })

3) Use role and persona prompts

Why: Asking the model to adopt a role (senior engineer, code reviewer) yields higher-quality, style-consistent output.

Before

  • Prompt: "Write SQL to find duplicate emails."
  • Resulting code:

SELECT email FROM users GROUP BY email HAVING COUNT(*) > 1;

After

  • Prompt: "You are a senior data engineer. Provide optimized PostgreSQL to list duplicate emails and include user ids (limit output to 100 rows). Explain indexing suggestions after the query."
  • Resulting code and notes:

SELECT email, array_agg(id) AS ids FROM users GROUP BY email HAVING COUNT(*) > 1 LIMIT 100;

-- Suggestion: add an index on lower(email) for case-insensitive searches:
CREATE INDEX ON users (lower(email));

4) Ask for tests and edge cases

Why: Tests force correctness and cover common pitfalls.

Before

  • Prompt: "Implement a merge sort in JavaScript."
  • Resulting code (no tests):

function mergeSort(arr) { /* ... */ }

After

  • Prompt: "Implement mergeSort in JS (ES2020), include unit tests (Jest), and handle non-array inputs gracefully."
  • Resulting code:

function merge(left, right) {
  const out = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    out.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return out.concat(left.slice(i), right.slice(j));
}

function mergeSort(arr) {
  if (!Array.isArray(arr)) throw new TypeError('Expected array');
  if (arr.length < 2) return arr;
  const mid = Math.floor(arr.length / 2);
  return merge(mergeSort(arr.slice(0, mid)), mergeSort(arr.slice(mid)));
}

// Jest tests
test('mergeSort sorts numbers', () => {
  expect(mergeSort([3,1,2])).toEqual([1,2,3]);
});

5) Request incremental steps (stepwise refinement)

Why: Breaking work into steps avoids big, incorrect outputs.

Before

  • Prompt: "Create a CLI tool to upload files to S3 in Go."
  • Resulting code: a long, error-prone file.

After

  • Prompt: "Plan steps to build a Go CLI uploader: 1) parse flags, 2) validate file, 3) stream to S3 with retry. Return only the plan. Then (after I confirm) produce code for step 1."
  • Resulting action: model returns a clear 3-step plan and produces focused, tested code for step 1.
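If you drive the model through an API rather than a chat window, the plan-then-steps flow above can be sketched as a small driver loop. This is a minimal illustration, not a real client: `ask_model` is a hypothetical stand-in for whatever AI API you actually call.

```python
# Sketch of stepwise refinement: request a plan first, then ask for code
# one step at a time. ask_model is a placeholder for your real AI client.

def ask_model(prompt: str) -> str:
    # Placeholder: in practice this would call an AI API.
    return f"[model response to: {prompt!r}]"

def stepwise(task: str, steps: list[str]) -> list[str]:
    """Ask for a plan, then request code for each step separately."""
    plan = ask_model(
        f"Plan steps to {task}: " + "; ".join(steps) + ". Return only the plan."
    )
    outputs = [plan]
    for i, step in enumerate(steps, start=1):
        # In a real workflow you would review/confirm before each request.
        outputs.append(ask_model(f"Now produce code for step {i} only: {step}"))
    return outputs

results = stepwise(
    "build a Go CLI uploader",
    ["parse flags", "validate file", "stream to S3 with retry"],
)
print(len(results))  # the plan plus one focused response per step
```

The point is structural: each request is small enough to review and test before moving on.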

6) Ask for complexity and trade-offs

Why: When you request time/space complexity or trade-offs, the model writes more considered code.

Before

  • Prompt: "Write a caching layer in Node.js."
  • Resulting code: simple in-memory cache.

After

  • Prompt: "Implement an LRU cache in Node.js, list complexity (get/set), and discuss memory vs persistence trade-offs."
  • Resulting code:

class LRUCache {
  constructor(cap) {
    this.cap = cap;
    this.map = new Map();
  }
  get(k) {
    if (!this.map.has(k)) return null;
    const v = this.map.get(k);
    this.map.delete(k);
    this.map.set(k, v);
    return v;
  }
  set(k, v) {
    if (this.map.has(k)) this.map.delete(k);
    else if (this.map.size === this.cap) this.map.delete(this.map.keys().next().value);
    this.map.set(k, v);
  }
}

// Complexity: get/set O(1). Trade-offs: in-memory is fast but volatile; use Redis for persistence.
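For comparison, here is a sketch of the same O(1) get/set pattern in Python, built on `collections.OrderedDict` (which, like the JS `Map`, preserves insertion order):

```python
from collections import OrderedDict

class LRUCache:
    """LRU cache with O(1) get/set, mirroring the Node.js Map version."""

    def __init__(self, cap: int):
        self.cap = cap
        self.map: OrderedDict = OrderedDict()

    def get(self, k):
        if k not in self.map:
            return None
        self.map.move_to_end(k)  # mark as most recently used
        return self.map[k]

    def set(self, k, v):
        if k in self.map:
            self.map.move_to_end(k)
        elif len(self.map) == self.cap:
            self.map.popitem(last=False)  # evict the least recently used entry
        self.map[k] = v

cache = LRUCache(2)
cache.set('a', 1)
cache.set('b', 2)
cache.get('a')       # 'a' is now most recently used
cache.set('c', 3)    # capacity reached: evicts 'b'
print(cache.get('b'))  # None
```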

A dark-themed chat interface displaying an AI assistant conversation starter on a screen.

Photo by Matheus Bertelli on Pexels

7) Provide concrete examples and representative inputs

Why: Examples anchor the model to real data formats and edge cases.

Before

  • Prompt: "Normalize phone numbers."
  • Resulting code: naive string replace.

After

  • Prompt: "Normalize international phone numbers to E.164. Example inputs: '+1 (555) 123-4567', '0044 20 7946 0958', '02079460958' (assume UK default). Return Python function using phonenumbers library and tests."
  • Resulting code:

import phonenumbers

def to_e164(number, default_region='GB'):
    p = phonenumbers.parse(number, default_region)
    if not phonenumbers.is_valid_number(p):
        raise ValueError('Invalid number')
    return phonenumbers.format_number(p, phonenumbers.PhoneNumberFormat.E164)

8) Limit scope and enforce style

Why: Ask for PEP8, Prettier, or specific linters to ensure consistent output.

Before

  • Prompt: "Create a React component for a login form."
  • Resulting code: inconsistent style and no tests.

After

  • Prompt: "Write a React (v18) functional LoginForm using TypeScript, TailwindCSS classes, and include PropTypes-equivalent types. Keep lines under 100 characters and include a simple RTL test. Only return the component file."
  • Resulting code: a tidy, typed component file matching the style constraints.

9) Request diffs or patches for updates

Why: When refactoring, ask for a unified diff or patch so you can apply changes safely.

Before

  • Prompt: "Refactor this file." (model returns full file with unknown edits)

After

  • Prompt: "Given this file, output a git-style unified diff that replaces the current logger with structured JSON logger. Only output the diff."
  • Resulting code: a clean unified diff you can feed to patch or review in PR.
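If you want to see what a git-style unified diff looks like, or generate one locally to compare against what the model returns, Python's standard `difflib` can produce one. The file contents here are illustrative:

```python
import difflib

# Two illustrative versions of a module: plain logging vs. structured logging.
old = [
    "import logging\n",
    "logger = logging.getLogger(__name__)\n",
]
new = [
    "import structlog\n",
    "logger = structlog.get_logger(__name__)\n",
]

# unified_diff yields diff lines lazily; join them into one patch string.
diff = difflib.unified_diff(old, new, fromfile="a/app.py", tofile="b/app.py")
print("".join(diff))
```

The resulting patch uses the same `---`/`+++`/`@@` hunk format as `git diff`, so it can be reviewed in a PR or applied with `patch`.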

10) Combine prompts with tool-integrated checks

Why: Use the AI in a loop with CI checks, linters, and tests. Ask the model to produce updates until tests pass.

Before

  • Prompt: "Make tests pass for failing suite." Result: one-off fixes.

After

  • Prompt: "Run the failing tests (I’ll paste errors). For each failure, propose a minimal patch (unified diff). After I apply it, re-run tests until green. Prefer minimal, well-documented changes."
  • Resulting interaction: cyclical refinements that produce targeted fixes and explanations.
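A small piece of supporting tooling makes that loop practical: something to pull failing test ids out of a test run so each failure can become its own focused prompt. This sketch parses pytest's standard "short test summary" lines; the sample output is illustrative:

```python
import re

def failing_tests(pytest_output: str) -> list[str]:
    """Extract failing test ids from pytest's short test summary lines."""
    pattern = re.compile(r"^FAILED (\S+)", re.MULTILINE)
    return pattern.findall(pytest_output)

# Illustrative pytest output, as you might paste it into a prompt.
sample = """\
=========================== short test summary info ===========================
FAILED tests/test_api.py::test_pagination - AssertionError: 50 != 20
FAILED tests/test_api.py::test_total_count - KeyError: 'total'
========================= 2 failed, 14 passed in 1.2s =========================
"""

for test_id in failing_tests(sample):
    # Each failure becomes its own prompt: "Propose a minimal unified diff
    # that fixes <test_id>; here is the error output: ..."
    print(test_id)
```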

Two men analyzing code on computers in a modern office setting.

Photo by Mikhail Nilov on Pexels

Practical notes on tools and pricing (Feb 2026)

Popular tools for AI-assisted coding include:

  • ChatGPT (OpenAI) — ChatGPT Plus historically at $20/mo for individuals; enterprise plans and model families vary, so check the official site for current tiers.
  • GitHub Copilot — consumer plans have been around $10/mo; Copilot for Business/Enterprise has additional features and billing.
  • Anthropic Claude and Google Gemini — both offer tiers and integrations; pricing and latest model versions change frequently.

Tip: pick the model that fits your workflow (local IDE vs cloud), and verify the current model version and pricing on each vendor's official pages before committing to an enterprise plan.

Quick checklist

  • Be explicit about format and constraints.
  • Use persona and role prompts for higher-quality output.
  • Ask for tests and edge cases every time.
  • Prefer diffs/patches for refactors.
  • Iterate: stepwise prompts reduce large mistakes.
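One way to make the checklist habitual is a small prompt template. A sketch follows; the field names are illustrative, not any standard:

```python
def build_prompt(task: str, role: str, output_format: str,
                 constraints: list[str], require_tests: bool = True) -> str:
    """Assemble a coding prompt that covers the checklist items."""
    parts = [
        f"You are a {role}.",                       # persona/role
        f"Task: {task}",
        f"Output format: {output_format}",          # explicit format
        "Constraints: " + "; ".join(constraints),   # context and constraints
    ]
    if require_tests:
        parts.append("Include unit tests and handle edge cases.")
    return "\n".join(parts)

prompt = build_prompt(
    task="normalize phone numbers to E.164",
    role="senior Python engineer",
    output_format="a single Python function, no explanation",
    constraints=["Python 3.11", "use the phonenumbers library"],
)
print(prompt)
```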

Closing

These 10 techniques turn vague prompts into reproducible, testable code. Try them while building the small app from Episode 2 — you’ll see immediate improvements in reliability and readability. In Episode 4 we’ll cover using AI for architecture design and trade-offs.

Frequently Asked Questions

How do I choose which AI tool to use for coding?

Pick the tool that integrates into your workflow (IDE plugin, web UI, or API), supports your language/framework, and fits your budget. Test a small task across 2–3 providers to compare output quality and cost.

Should I trust AI-generated code in production?

Treat AI output as a developer assistant: review, test, and audit code before production. Always include unit/integration tests and security reviews for critical paths.

What’s the fastest way to get better results from prompts?

Be specific: provide context, examples, desired output format, and ask for tests. Iteratively refine prompts and request diffs for refactors to keep changes minimal.

Vibe Coding 101

Episode 3 of 5

  1. What Is Vibe Coding? The Complete Beginner's Guide
  2. Your First App with AI: Build a Portfolio Site in 30 Minutes
  3. AI Prompting Tips for Better Code: 10 Techniques That Work
  4. How to Debug with AI: Turn Errors into Solutions
  5. Vibe Coding vs Traditional Coding: Key Differences Explained
Tags: ai code prompting techniques, prompt engineering for coding, improve ai code prompts, ai-assisted coding best practices, debugging with ai prompts, contextual prompts for coding