Writing prompts that produce Marketplace-passing Magento code.
How much Magento context can I cram into one prompt before it falls over?
Claude Opus / Sonnet 4.x has a 200K token context (and a 1M variant). Practical limit for Magento work is closer to 40–60K tokens of relevant code — anything more and the model starts to confuse module boundaries. Strategy: don't paste the whole module; paste the interfaces + di.xml + schema.graphqls + the one file you want changed. Use @-references in Claude Code to lazy-load supporting files only when needed. Token-count cheat: 1 line of PHP ≈ 8 tokens, 1 line of XML ≈ 12, 1 line of GraphQL ≈ 6.
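The per-line cheat above turns into a quick pre-paste budget check. A minimal sketch using the article's own line-cost figures (8/12/6) — these are rough averages, not a real tokenizer, and the 50K budget is just the midpoint of the 40–60K window:

```python
from pathlib import Path

# Rough per-line token costs from the cheat sheet above (estimates, not a tokenizer).
TOKENS_PER_LINE = {".php": 8, ".xml": 12, ".graphqls": 6}

def estimate_tokens(paths):
    """Sum an approximate token count for the files you plan to paste."""
    total = 0
    for p in map(Path, paths):
        cost = TOKENS_PER_LINE.get(p.suffix)
        if cost is None:
            continue  # no estimate for this file type; skip it
        total += cost * len(p.read_text().splitlines())
    return total

def fits_budget(paths, budget=50_000):
    """True if the paste stays inside the practical 40-60K window (midpoint used)."""
    return estimate_tokens(paths) <= budget
```

Run it over the interfaces + di.xml + schema.graphqls + target file before you paste; if it fails, cut supporting files and switch to @-references.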
How does CLAUDE.md "anchoring" work in practice?
Anchoring = the model treats CLAUDE.md as ground truth and refers back to it when in doubt. The trick is to make the rules specific and negative: "Do not edit anything under vendor/. Do not use ObjectManager outside Setup/ or Test/. Do not write raw SQL — use db_schema.xml. Plugins beat preferences. Service contracts beat direct model access." Vague rules ("write clean code") get ignored; concrete prohibitions get followed. End every CLAUDE.md with a 5-line "DEFINITION OF DONE" checklist — the model will tick it off in its own response and you can verify visually.
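Put together, a minimal CLAUDE.md in this style might look like this (a sketch, not an official template; the rules are the prohibitions quoted above and the checklist items are this article's eval commands):

```markdown
# Project rules (Magento 2)

- Do not edit anything under vendor/.
- Do not use ObjectManager outside Setup/ or Test/.
- Do not write raw SQL — use db_schema.xml.
- Plugins beat preferences. Service contracts beat direct model access.

## DEFINITION OF DONE
- [ ] php -l passes on every new file
- [ ] phpcs --standard=Magento2 reports zero errors
- [ ] phpstan --level=6 reports zero errors
- [ ] No ObjectManager outside Setup/ or Test/
- [ ] Every new plugin/preference is declared in di.xml
```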
Does role priming ("you are a senior Magento dev") actually help?
Marginally. The bigger lever is task framing: tell the model what the output will be reviewed against, not who it is. Compare "You are a senior Magento dev. Add a plugin to product save." against "Add a plugin to ProductRepository::save. The PR will be reviewed by phpcs --standard=Magento2, phpstan --level=6, and Adobe's EQP checklist. The Marketplace tech-review reads it next." The second framing produces strict-typed code, full DocBlocks, no ObjectManager calls, and proper exception handling — without ever saying "senior". Save the role priming for the sub-agents (e.g. a code-reviewer agent primed as an EQP reviewer).
What is an "output contract" in a prompt?
The contract pins down the shape of the response so you can verify it programmatically. Example for a Magento scaffold prompt: "Output exactly these files in this order: 1) etc/module.xml, 2) etc/di.xml, 3) Api/<Entity>RepositoryInterface.php, 4) Model/<Entity>Repository.php. Each file in its own <file> block with the absolute path as the first line. No prose between files." This lets a downstream script extract files with regex and write them straight to disk — no copy-paste, no hallucinated filenames. Combine with a hook that runs php -l on each extracted PHP file as the eval loop.
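The extraction step can be sketched in a few lines of Python. This assumes each `<file>` block carries the absolute path on its first line, exactly as the contract above demands — the `<file>` tag and its layout are the contract's invention, not a model feature:

```python
import re
from pathlib import Path

# Matches <file>...</file> blocks: first line inside is the path,
# the rest is the file body (per the output contract, not a model guarantee).
FILE_BLOCK = re.compile(r"<file>\n(.*?)\n(.*?)</file>", re.DOTALL)

def extract_files(response: str):
    """Yield (path, source) pairs from a contract-conforming response."""
    for match in FILE_BLOCK.finditer(response):
        path, body = match.groups()
        yield path.strip(), body

def write_files(response: str, root: Path):
    """Write each extracted file under root; return the paths written."""
    written = []
    for path, body in extract_files(response):
        target = root / path.lstrip("/")
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(body)
        written.append(target)
    return written
```

Chain `php -l` over the returned paths and you have the first half of the eval loop.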
What does an "eval loop" look like for Magento prompts?
An eval loop is a tight write → verify → correct cycle, not a one-shot. Concrete Magento setup: (1) prompt produces files, (2) a PostToolUse hook runs php -l + vendor/bin/phpcs --standard=Magento2 + vendor/bin/phpstan analyse, (3) any failure feeds back into the conversation as a follow-up message automatically, (4) the model fixes it without you typing. Five iterations max; if it still fails, your CLAUDE.md is missing a rule. The eval loop is what turns Claude Code from "impressive demo" to "actually shipping". Without it you're back to copy-pasting code into Travis and waiting 10 minutes.
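Step (2) can be sketched as a PostToolUse hook script. The check commands are the ones named above; the hook mechanics (edited file path arriving as JSON on stdin under `tool_input.file_path`, exit code 2 feeding stderr back to the model) follow Claude Code's documented hook behavior, but verify the payload fields against your version before relying on this:

```python
import json
import subprocess
import sys

# The three checks from the eval loop above; paths assume a standard Magento root.
CHECKS = [
    ["php", "-l"],
    ["vendor/bin/phpcs", "--standard=Magento2"],
    ["vendor/bin/phpstan", "analyse"],
]

def run_checks(path: str):
    """Run each linter on the edited file; collect (command, output) for failures."""
    failures = []
    for cmd in CHECKS:
        result = subprocess.run(cmd + [path], capture_output=True, text=True)
        if result.returncode != 0:
            failures.append((" ".join(cmd), result.stdout + result.stderr))
    return failures

def failures_to_feedback(failures):
    """Format failures into the follow-up message the model will see."""
    return "\n\n".join(f"`{cmd}` failed:\n{out.strip()}" for cmd, out in failures)

if __name__ == "__main__":
    event = json.load(sys.stdin)  # hook payload from Claude Code
    path = event.get("tool_input", {}).get("file_path", "")
    if path.endswith(".php"):
        failures = run_checks(path)
        if failures:
            # Exit code 2 blocks and routes stderr back into the conversation,
            # which is what closes the loop without you typing.
            print(failures_to_feedback(failures), file=sys.stderr)
            sys.exit(2)
```

Cap the loop by counting correction rounds in your hook state if you want the "five iterations max" rule enforced mechanically rather than by eye.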