A comprehensive template for getting started with Context Engineering - the discipline of engineering context for AI coding assistants so they have the information necessary to get the job done end to end.
Context Engineering is 10x better than prompt engineering and 100x better than vibe coding.
💡 Want to learn more about the philosophy behind this project? Read my introductory blog post: "Beyond the Prompt: A Practical Guide to Context Engineering with Gemini."
Before you can use this framework, you need to configure it with a Google Gemini API key.
You can get a free API key from Google AI Studio.
This framework uses an environment variable to securely access your API key. Open your terminal and run the appropriate command for your system:
- macOS / Linux:
export GEMINI_API_KEY="YOUR_API_KEY_HERE"
- Windows (PowerShell):
$env:GEMINI_API_KEY="YOUR_API_KEY_HERE"
Note: This variable is only set for your current terminal session. For a permanent solution, you'll need to add this line to your shell's startup file (e.g., `.zshrc`, `.bash_profile`, or your PowerShell profile).
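For example, on macOS or Linux with zsh, you can append the export line to your startup file (adjust the file name for your shell):

# Persist the key for future terminal sessions (zsh shown; use ~/.bash_profile for bash)
echo 'export GEMINI_API_KEY="YOUR_API_KEY_HERE"' >> ~/.zshrc
source ~/.zshrc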
The scripts use standard command-line tools to function. Please install them using your system's package manager.
- `jq`: For parsing JSON responses.
- `awk`: For parsing the AI's plan. (Usually pre-installed on Linux/macOS.)
- `base64`: For encoding/decoding content. (Usually pre-installed on Linux/macOS.)
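For example, `jq` can typically be installed like this (package names may vary by distribution):

# macOS (Homebrew)
brew install jq
# Debian/Ubuntu
sudo apt-get install -y jq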
Context Engineering represents a paradigm shift from traditional prompt engineering:
Prompt Engineering:
- Focuses on clever wording and specific phrasing
- Limited to how you phrase a task
- Like giving someone a sticky note

Context Engineering:
- A complete system for providing comprehensive context
- Includes documentation, examples, rules, patterns, and validation
- Like writing a full screenplay with all the details
Why it matters:
- Reduces AI Failures: Most agent failures aren't model failures - they're context failures
- Ensures Consistency: AI follows your project patterns and conventions
- Enables Complex Features: AI can handle multi-step implementations with proper context
- Self-Correcting: Validation loops allow AI to fix its own mistakes
context-engineering-gemini/
│
├── .gemini/                 # Houses all AI-related tooling and templates.
│   ├── scripts/
│   │   ├── generate-prp.sh
│   │   └── execute-prp.sh
│   └── templates/
│       └── prp_template.md
│
├── PRPs/                    # Stores the detailed PRP blueprints generated by the AI.
│   ├── weather_cli_prp.md
│   └── ...
│
├── examples/                # Your code examples for the AI to follow.
│
├── src/                     # Your application's source code lives here.
│   ├── weather/             # Python Example
│   │   ├── api.py
│   │   └── cli.py
│   └── linkchecker/         # Go Example
│       ├── main.go
│       ├── parser.go
│       └── validator.go
│
├── tests/                   # Unit tests for your source code.
│   ├── test_weather.py
│   └── linkchecker_test.go
│
├── .gitignore
├── GEMINI.md                # Global rules and principles for the AI assistant.
├── INITIAL.md               # A blank template for writing new feature requests.
├── INITIAL_EXAMPLE.md       # An example of a completed feature request.
└── README.md                # This file.
Future Directions:
This template provides a robust foundation for Context Engineering. The next evolution of this workflow could involve integrating more advanced AI techniques, such as Retrieval-Augmented Generation (RAG) for automated documentation research and AI-driven tools (function calling) for a fully autonomous implementation and testing loop.
This framework uses a powerful, two-step workflow to build software with Gemini.
To start a new feature, you don't edit the `INITIAL.md` template directly. Instead:

- Copy `INITIAL.md` to a new file named `request.md`.
- Fill out `request.md` with the details of your new feature.
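For example, from the repository root:

# Create your feature request from the blank template
cp INITIAL.md request.md
# Then open request.md in your editor and fill in each section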
Now, run the `generate-prp.sh` script. This script sends your feature request to Gemini and asks it to act as a senior engineer, creating a detailed technical blueprint called a Product Requirements Prompt (PRP).
- For macOS / Linux:
# Make the script executable first
chmod +x .gemini/scripts/generate-prp.sh
# Run the script
./.gemini/scripts/generate-prp.sh request.md
- For Windows (using a bash interpreter like Git Bash):
bash ./.gemini/scripts/generate-prp.sh request.md
This will create a new, permanent PRP file inside the `PRPs/` directory.
This is where the magic happens. Run the `execute-prp.sh` script with the path to the newly created PRP.
- For macOS / Linux:
# Make the script executable first
chmod +x .gemini/scripts/execute-prp.sh
# Run the script
./.gemini/scripts/execute-prp.sh PRPs/request_prp.md
- For Windows (using a bash interpreter like Git Bash):
bash ./.gemini/scripts/execute-prp.sh PRPs/request_prp.md
This script acts as an AI agent:

- It sends the detailed PRP to Gemini to get a step-by-step implementation plan.
- It parses the AI's response, identifying shell commands to run and code files to create.
- It then executes this plan step-by-step, pausing to ask for your confirmation before running any command or writing any file.
This allows you to sit back and supervise as the AI builds the entire feature for you, right in your local terminal.
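As a rough illustration, the confirmation gate works along these lines (a minimal sketch; the real logic lives in .gemini/scripts/execute-prp.sh and may differ):

# Hypothetical sketch of a confirmation-gated step runner
run_step() {
  local description="$1" command="$2"
  echo "Next step: $description"
  echo "Command:   $command"
  read -r -p "Run this step? [y/N] " answer
  if [[ "$answer" == [yY]* ]]; then
    eval "$command"
  else
    echo "Skipped."
  fi
}

run_step "Create the package directory" "mkdir -p src/weather"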
Writing Effective INITIAL.md Files

The `INITIAL.md` file is the starting point for any new feature. The quality of your input here directly impacts the quality of the AI's output. Here's how to make it effective:
- 🎯 The Goal: Be specific and clear. Instead of "make a login page," write "create a login page with email/password fields, a 'Forgot Password' link, and Google OAuth integration." The more detail, the better.
- 🎨 Inspiration & Examples: This is one of the most powerful sections. If you have an existing file that shows how you handle API clients, database connections, or error handling, reference it here. It gives the AI a concrete pattern to follow, ensuring consistency.
- 📚 Required Knowledge: Don't make the AI guess. Provide direct links to the exact API documentation, libraries, or Stack Overflow threads it will need. This saves time and prevents the AI from using outdated or incorrect information.
- ⚠️ Potential Pitfalls & Gotchas: Think about what might go wrong. Does the API have a weird rate limit? Is there a tricky authentication flow? Mentioning these upfront helps the AI avoid common mistakes and build more robust code from the start.
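As a rough illustration, a concise but specific request might look like this (a hypothetical sketch; the section headings are illustrative, so follow the ones in your copy of INITIAL.md):

# Hypothetical example of a filled-out request.md
cat > request.md <<'EOF'
FEATURE: A weather CLI (src/weather/cli.py) that prints the current
temperature for a given city using a public weather API.
EXAMPLES: Follow the API-client and error-handling patterns in examples/.
DOCUMENTATION: Link to the weather API's "current conditions" endpoint docs.
OTHER CONSIDERATIONS: The free tier of the API is rate-limited; read the
API key from an environment variable rather than hard-coding it.
EOF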
The PRP (Product Requirements Prompt) workflow is a two-step process designed to ensure the AI has a comprehensive plan before writing a single line of code.
- Generation (`generate-prp.sh`): This first step is about planning. The script takes your high-level `INITIAL.md` feature request and asks Gemini to expand it into a detailed technical blueprint (the PRP). This blueprint includes a proposed file structure, a task breakdown, pseudocode, and a validation plan. It forces the AI to think through the entire implementation first.
- Execution (`execute-prp.sh`): This second step is about implementation. The script takes the detailed PRP and sends it back to Gemini with a clear instruction: "Build this." Because all the research and planning is already done, the AI can focus solely on writing clean, correct code that follows the blueprint.
This separation of planning from implementation is the key to the workflow's success.
The `examples/` folder is your secret weapon for ensuring project consistency. AI assistants excel at pattern recognition.
- Complete Patterns: Show a full, working example of a pattern, not just a snippet. For instance, provide a complete API client class, not just one function.
- Code Structure: Include examples of how you structure your classes, organize your imports, and name your variables.
- Error Handling: Show how you expect errors to be caught and handled. This is often overlooked but is critical for production-quality code.
- Testing: Provide an example of a test file (`test_*.py`) that shows your preferred testing style, including how to use mocks.
The more high-quality examples you provide, the less the AI has to guess, and the more the final code will look like you wrote it yourself.
- Be Explicit: Never assume the AI knows your preferences. The more explicit you are in your `INITIAL.md` and `GEMINI.md` files, the better the result will be.
- Iterate on the Process: If the AI makes a mistake, don't just fix the code. Think about why it made the mistake. Does a rule in `GEMINI.md` need to be clearer? Do you need a better example in the `examples/` folder? Improving the process will prevent the same mistake from happening again.
- Trust the Workflow: It might seem like extra work to write a detailed `INITIAL.md` and review a PRP, but this upfront investment saves a massive amount of time on debugging and refactoring later.
This project's workflow and the core concepts of Context Engineering were inspired by the original Context-Engineering-Intro repository by coleam00. While the code and templates have been rewritten for a Gemini-based workflow, the foundational ideas come from that excellent project.