I Built a PCF Control with AI - Here's Every Prompt I Used
The exact AI prompts, failures, and iterations behind a production PCF control with 90+ unit tests and config-driven architecture.
Everyone talks about AI writing code. Almost nobody shows you the actual prompts.
I’ve read dozens of “I used AI to build X” posts. They all follow the same pattern: vague description of asking AI to do something, screenshot of the result, breathless caption about productivity. No prompt text. No failed attempts. No iteration. Just vibes.
So here’s the real version. I built Field Audit History - a PCF control that puts an inline audit trail next to every field on a Dynamics 365 form. It has 90+ unit tests, a config-driven architecture, works on any Dataverse entity, and ships as a managed solution. I built it with Claude Code as my pair programmer.
These are the actual prompts I used. The ones that worked and the ones that produced garbage.
What the Control Needed to Do
The problem was simple. Dataverse audit history exists but nobody uses it because it takes five clicks and speaks in schema names. I wanted a clock icon next to each audited field. Click it, see who changed what, when. Restore old values. Export to CSV. All inline, no navigation.
That’s a React app running inside a PCF control, calling the Dataverse Web API for audit records and entity metadata, rendering a popup and a side panel, with config loaded from a JSON web resource.
Not a weekend project. Not a “let AI do it all” project either.
Prompt 1: Scaffolding the PCF Project
My first prompt was the most important one, because it set the architecture for everything after.
Create a new PCF control project called FieldAuditHistory. Use React
for rendering. The control binds to a single text field on a
model-driven app form. It needs to:
- Read the bound entity's metadata to find which fields are audited
- Inject clock icons next to audited fields on the form
- On click, fetch audit history from the Dataverse Web API
- Show a popup with the change timeline for that field
Use TypeScript strict mode. Create the manifest, index.ts entry point,
and a React component structure with separate files for the popup,
timeline, and API layer.
This prompt worked well because it was specific about the technology choices and the component boundaries. I didn’t say “build me an audit control.” I said exactly what pieces I needed and how they connect.
What came back was a clean project skeleton. Manifest with one input property. An index.ts that bootstrapped React. Separate files for AuditPopup.tsx, AuditTimeline.tsx, and DataverseAuditApi.ts. TypeScript interfaces for audit records.
The AI didn’t know about some PCF-specific quirks. It created the React root with createRoot, which is correct, but missed that the root needs to be torn down and recreated on updateView calls. Small fix. The structure was solid.
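For context, the corrected lifecycle ends up looking roughly like this. It’s a sketch rather than the shipped source: FieldAuditApp and its import path are placeholders, and it assumes the standard pac pcf scaffold with generated ManifestTypes and the ComponentFramework typings available.

```typescript
// index.ts - sketch of the React root lifecycle described above.
// FieldAuditApp is a placeholder component name, not the real source.
import * as React from "react";
import { createRoot, Root } from "react-dom/client";
import { IInputs, IOutputs } from "./generated/ManifestTypes";
import { FieldAuditApp } from "./components/FieldAuditApp";

export class FieldAuditHistory implements ComponentFramework.StandardControl<IInputs, IOutputs> {
  private container!: HTMLDivElement;
  private root: Root | null = null;

  public init(
    context: ComponentFramework.Context<IInputs>,
    notifyOutputChanged: () => void,
    state: ComponentFramework.Dictionary,
    container: HTMLDivElement
  ): void {
    this.container = container;
  }

  public updateView(context: ComponentFramework.Context<IInputs>): void {
    // Tear down and recreate the root so each updateView renders against fresh context.
    this.root?.unmount();
    this.root = createRoot(this.container);
    this.root.render(React.createElement(FieldAuditApp, { context }));
  }

  public getOutputs(): IOutputs {
    return {};
  }

  public destroy(): void {
    // Unmount cleanly when the form disposes the control.
    this.root?.unmount();
    this.root = null;
  }
}
```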
Prompt 2: Dataverse Audit Web API Integration
This is where things got interesting. The Dataverse audit API is not well documented for client-side use in PCF controls. My prompt:
Write the DataverseAuditApi class. It needs to:
- Call /api/data/v9.2/audits with $filter on objectid and
attribute name
- Parse the audit response which has AttributeData with OldValue
and NewValue
- Resolve entity metadata to get display names for fields
- Handle lookup fields (show the display name, not the GUID)
- Handle optionset fields (show the label, not the integer)
- Cache metadata so we don't re-fetch on every click
- Return typed AuditEntry[] with human-readable values
The control runs inside context.webAPI - use retrieveMultipleRecords
for standard queries and fetch() with context.page.getClientUrl()
for audit-specific endpoints.
The AI produced a working API layer on the first pass. But it had two problems.
First, it used context.webAPI.retrieveMultipleRecords('audit', ...) for the audit query. That doesn’t work. The audit entity has special access patterns and you need to hit the REST endpoint directly. I had to tell it to switch to a raw fetch call against the audit endpoint.
Second, it hallucinated an AttributeData property on the audit response. The actual response structure uses _objectid_value, and the attribute changes come from a separate call to the audit detail endpoint. The shape of the JSON it assumed was wrong.
This is exactly the kind of thing AI gets wrong with less-documented APIs. It confidently generates code that looks right but calls endpoints that don’t exist or parses response shapes that don’t match reality.
My fix prompt:
The audit API doesn't work through context.webAPI for the audit
entity. Use fetch() directly against:
{clientUrl}/api/data/v9.2/audits?$filter=_objectid_value eq {recordId}
The response doesn't have AttributeData. Each audit record has
_objectid_value, createdon, _userid_value, and operation. To get the
actual field changes, you need to call:
/api/data/v9.2/audits({auditId})/Microsoft.Dynamics.CRM.RetrieveAuditDetails()
That returns an AuditDetail with OldValue and NewValue inside
AttributeAuditDetail. Fix the API class to use this two-step pattern.
After this correction, the API layer worked. The lesson: when you know the API, tell the AI the exact endpoint shape. Don’t let it guess.
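In code, that two-step pattern comes out roughly like this. It’s a sketch with trimmed types and error handling; the property names on the detail response follow the prompt above rather than a verified contract, and the real class goes on to resolve lookups and optionsets into readable values.

```typescript
// DataverseAuditApi.ts - sketch of the two-step audit fetch.
// Types are simplified for illustration.
export interface AuditEntry {
  auditId: string;
  changedOn: string;
  userId: string;
  oldValue?: unknown;
  newValue?: unknown;
}

export class DataverseAuditApi {
  constructor(private clientUrl: string) {}

  async getAuditEntries(recordId: string): Promise<AuditEntry[]> {
    const headers = { Accept: "application/json", "OData-MaxVersion": "4.0", "OData-Version": "4.0" };

    // Step 1: list audit headers for the record.
    const listUrl =
      `${this.clientUrl}/api/data/v9.2/audits` +
      `?$filter=_objectid_value eq ${recordId}&$orderby=createdon desc`;
    const listResponse = await fetch(listUrl, { headers });
    if (!listResponse.ok) {
      throw new Error(`Audit query failed: ${listResponse.status}`);
    }
    const audits: { auditid: string; createdon: string; _userid_value: string }[] =
      (await listResponse.json()).value;

    // Step 2: resolve old/new values per audit record via RetrieveAuditDetails.
    return Promise.all(
      audits.map(async (audit) => {
        const detailUrl =
          `${this.clientUrl}/api/data/v9.2/audits(${audit.auditid})` +
          `/Microsoft.Dynamics.CRM.RetrieveAuditDetails()`;
        const detailResponse = await fetch(detailUrl, { headers });
        const detail = detailResponse.ok ? await detailResponse.json() : {};
        return {
          auditId: audit.auditid,
          changedOn: audit.createdon,
          userId: audit._userid_value,
          oldValue: detail.AuditDetail?.OldValue,
          newValue: detail.AuditDetail?.NewValue,
        };
      })
    );
  }
}
```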
Prompt 3: The React UI
For the popup and timeline components, AI was genuinely good. UI code is where it shines because there are thousands of examples of React components in its training data.
Build AuditPopup as a React component that:
- Appears anchored near the clicked field's clock icon
- Shows the last 8 audit entries for that specific field
- Each entry shows: user display name, date (relative like "3 days
ago"), old value → new value
- Has Copy and Restore buttons on each entry
- Copy puts the old/new value on clipboard
- Restore calls context.webAPI.updateRecord to write the old value back
- Include a "Deep Dive" link that opens a side panel with the full
audit timeline for all fields
- Use Fluent UI v9 components for buttons, tooltips, and the panel
- Match the Dynamics 365 form style - no jarring visual differences
This came back 90% right. The popup positioning, the timeline rendering, the relative dates, the Fluent UI integration. All clean.
The Restore function needed work. The AI wrote a simple updateRecord call but didn’t handle lookup fields (you need to set the @odata.bind navigation property, not the raw GUID), optionset fields (you set the integer, not the label), or confirmation dialogs (you absolutely need an “are you sure?” before writing to Dataverse).
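The corrected restore logic ends up branching on the attribute type before calling updateRecord. A minimal sketch, with the confirmation dialog omitted and the navigation-property naming simplified (for some lookups the schema-name casing differs from the logical name):

```typescript
// Sketch of building the restore payload per attribute type.
// entityLogicalName and targetEntitySet come from metadata the control already caches.
function buildRestorePayload(
  attributeType: "Lookup" | "Picklist" | "String",
  fieldName: string,
  oldValue: string,
  targetEntitySet?: string // e.g. "accounts", only needed for lookups
): Record<string, unknown> {
  switch (attributeType) {
    case "Lookup":
      // Lookups are written through the @odata.bind navigation property,
      // not by assigning the raw GUID to the field. Casing may differ from the logical name.
      return { [`${fieldName}@odata.bind`]: `/${targetEntitySet}(${oldValue})` };
    case "Picklist":
      // Option sets take the integer value, not the label.
      return { [fieldName]: parseInt(oldValue, 10) };
    default:
      return { [fieldName]: oldValue };
  }
}

// Usage, only after the user confirms the restore:
// await context.webAPI.updateRecord(entityLogicalName, recordId, payload);
```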
I also had to add the logic for injecting clock icons into the form DOM. The AI understood React rendering but not how model-driven app forms work. It tried to render icons inside the PCF container div, when actually you need to find each field’s label element on the form and inject the icon as a sibling.
The clock icons need to be injected into the form DOM, not rendered
inside the PCF container. For each audited field:
1. Find the field's label element using
document.querySelector('[data-id="fieldname-field-label"]')
2. Create a small span with the clock SVG icon
3. Insert it as the last child of the label's parent container
4. Add a click handler that triggers the React popup positioned
near that icon
Handle fields that aren't rendered yet (tabs, sections). Use a
MutationObserver to detect when new fields appear and inject icons.
The MutationObserver pattern was something I had to specify. The AI wouldn’t have known that Dynamics 365 forms lazy-render fields in collapsed tabs.
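Put together, the injection plus observer logic looks roughly like this. The data-id selector follows the prompt above and is an assumption, not a guarantee; the real form DOM differs between classic and new rendering.

```typescript
// Sketch of icon injection with a MutationObserver for lazily rendered fields.
function injectClockIcon(fieldName: string, onClick: (anchor: HTMLElement) => void): boolean {
  const label = document.querySelector<HTMLElement>(`[data-id="${fieldName}-field-label"]`);
  const parent = label?.parentElement;
  if (!parent) return false;
  if (parent.querySelector(".audit-clock-icon")) return true; // already injected

  const icon = document.createElement("span");
  icon.className = "audit-clock-icon";
  icon.textContent = "\u{1F551}"; // placeholder glyph; the real control uses an SVG
  icon.addEventListener("click", () => onClick(icon));
  parent.appendChild(icon);
  return true;
}

function watchForLazyFields(auditedFields: string[], onClick: (anchor: HTMLElement) => void): MutationObserver {
  // Inject what's already rendered; keep track of fields still hidden in collapsed tabs/sections.
  const pending = new Set(auditedFields.filter((f) => !injectClockIcon(f, onClick)));

  // Retry injection whenever the form DOM changes, then stop observing once done.
  const observer = new MutationObserver(() => {
    for (const field of Array.from(pending)) {
      if (injectClockIcon(field, onClick)) pending.delete(field);
    }
    if (pending.size === 0) observer.disconnect();
  });
  observer.observe(document.body, { childList: true, subtree: true });
  return observer;
}
```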
Prompt 4: Config-Driven Architecture
This was the biggest architectural decision. I wanted the control to work on any entity without rebuilding. I’ve written about why this matters separately.
Refactor the control to be fully config-driven. Add a manifest
property called configWebResourceName (SingleLine.Text, optional).
During init(), if the property has a value:
1. Query Dataverse for the web resource by name
2. Decode the Base64 content
3. Parse as JSON
4. Deep merge with a DEFAULT_CONFIG object
The config should control:
- Which fields get icons (4 modes: audited, include, exclude, all)
- Per-table overrides with wildcard "*" default
- Feature toggles: allowRestore, allowCopy, allowExport
- Display settings: panelWidth, dateFormat, valuePreviewLength
- Pagination: pageSize, maxPages
- Every UI label (for localization)
Create a typed IConfig interface. Every property required, every
property has a default. Add runtime validation that warns on unknown
properties but doesn't crash.
This prompt produced excellent code. The AI understood the deep merge pattern, created clean TypeScript interfaces, and handled the Base64 decoding correctly. It even added schema version checking without me asking.
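The loading path itself is short. Here’s a sketch under a few assumptions: IConfig is trimmed to three properties, the web resource comes back through the webresource table with its Base64 content column, and the deep merge only recurses into plain objects.

```typescript
// Sketch of the config-loading path: fetch web resource, decode, parse, deep merge.
interface IConfig {
  fieldMode: "audited" | "include" | "exclude" | "all";
  allowRestore: boolean;
  panelWidth: number;
}

const DEFAULT_CONFIG: IConfig = { fieldMode: "audited", allowRestore: true, panelWidth: 480 };

function deepMerge<T extends object>(base: T, overrides: Partial<T>): T {
  const result: any = { ...base };
  for (const [key, value] of Object.entries(overrides ?? {})) {
    result[key] =
      value && typeof value === "object" && !Array.isArray(value)
        ? deepMerge((base as any)[key] ?? {}, value as any)
        : value;
  }
  return result as T;
}

async function loadConfig(
  webAPI: ComponentFramework.WebApi,
  webResourceName: string
): Promise<IConfig> {
  try {
    const result = await webAPI.retrieveMultipleRecords(
      "webresource",
      `?$select=content&$filter=name eq '${webResourceName}'&$top=1`
    );
    if (result.entities.length === 0) return DEFAULT_CONFIG;
    const json = JSON.parse(atob(result.entities[0].content)); // content is Base64
    return deepMerge(DEFAULT_CONFIG, json);
  } catch (e) {
    // Invalid JSON or a missing resource should warn, not crash the form.
    console.warn("Field Audit History: falling back to default config", e);
    return DEFAULT_CONFIG;
  }
}
```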
One thing it got wrong: the wildcard table matching. I wanted "*" to be the fallback for any table not explicitly configured. The AI’s first implementation required an exact table name match and fell back to defaults if missing. Close, but not what I described. A quick follow-up fixed it:
The tables config should support a "*" wildcard key as the default
for any table not explicitly listed. Lookup order: exact table name
match first, then "*" wildcard, then DEFAULT_CONFIG.
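The corrected lookup order fits in a few lines (TableConfig here is illustrative, not the real interface):

```typescript
// Sketch of the per-table lookup order: exact name, then "*" wildcard, then defaults.
interface TableConfig {
  fieldMode: string;
  allowRestore?: boolean;
}

function resolveTableConfig(
  tables: Record<string, TableConfig>,
  tableName: string,
  defaults: TableConfig
): TableConfig {
  return tables[tableName] ?? tables["*"] ?? defaults;
}
```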
Prompt 5: Unit Tests
This is where AI paid for itself. Writing 90+ unit tests manually would have taken days. I wrote zero of them by hand.
Write comprehensive Jest unit tests for the FieldAuditHistory control.
Test files should mirror source files. Cover:
DataverseAuditApi:
- Successful audit fetch with multiple entries
- Empty audit response
- API error handling (403 no privileges, 404 not found, 500 server)
- Lookup field value resolution
- Optionset field label resolution
- Metadata caching (second call shouldn't fetch)
- Pagination (fetching next page)
ConfigLoader:
- Load and parse valid config
- Missing web resource (should use defaults)
- Partial config (deep merge with defaults)
- Invalid JSON (should warn and use defaults)
- Base64 decoding
- Wildcard table matching
- Per-table override precedence
AuditPopup:
- Renders correct number of entries
- Shows human-readable field names
- Copy button copies to clipboard
- Restore button shows confirmation
- Restore handles lookup fields correctly
- Deep Dive link opens panel
AuditTimeline:
- Filters by field name
- Filters by user
- Filters by date range
- Filters by operation type
- Collapsible groups
- CSV export generates correct format
Mock context.webAPI for all Dataverse calls. Mock fetch for direct
API calls. Mock clipboard API. Mock DOM for icon injection tests.
The AI generated 90+ tests across the test suite. About 85 of them passed on the first run.
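For a sense of the shape, here’s roughly what one of the simpler generated tests looked like, reconstructed against the API sketch from earlier rather than copied from the repo:

```typescript
// Sketch of a generated-style Jest test: empty audit response and privilege errors.
import { DataverseAuditApi } from "../DataverseAuditApi";

describe("DataverseAuditApi", () => {
  const recordId = "7f8c1d2e-0000-0000-0000-000000000001";

  it("returns an empty list when the record has no audit history", async () => {
    global.fetch = jest.fn().mockResolvedValue({
      ok: true,
      json: async () => ({ value: [] }),
    }) as unknown as typeof fetch;

    const api = new DataverseAuditApi("https://org.crm.dynamics.com");
    await expect(api.getAuditEntries(recordId)).resolves.toEqual([]);
  });

  it("surfaces a readable error when the caller lacks audit read privileges", async () => {
    global.fetch = jest.fn().mockResolvedValue({ ok: false, status: 403 }) as unknown as typeof fetch;

    const api = new DataverseAuditApi("https://org.crm.dynamics.com");
    await expect(api.getAuditEntries(recordId)).rejects.toThrow("403");
  });
});
```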
The five that failed exposed real bugs in my code:
- Optionset restore was sending the label string instead of the integer value. The test tried to restore “Active” and the mock expected 1. My code was wrong, not the test.
- CSV export wasn’t escaping commas in field values. The test had a value containing a comma and the CSV assertion caught the missing quotes.
- The metadata cache key didn’t include the entity name. If you opened audit popups on a subgrid after viewing the main form, the cached metadata was for the wrong entity. The test for “switching entities” caught this.
- Date range filter was using local time comparison against UTC timestamps. Off by one day depending on timezone. The test with a specific boundary date caught it.
- The confirmation dialog for Restore didn’t prevent double-clicks. The test fired two rapid click events and the mock detected two updateRecord calls.
Five bugs I didn’t know about. Found by AI-generated tests. That alone justified using AI for this project.
What Didn’t Work
Not every prompt produced usable code. Here’s what failed.
Hallucinated Dataverse APIs. Early on, the AI confidently generated calls to context.webAPI.retrieveAuditHistory(). This method doesn’t exist. It also invented context.utils.getEntityMetadata() with a signature that doesn’t match the real one. Any time you’re calling Dataverse APIs, verify the method signatures yourself.
DOM manipulation assumptions. The AI assumed model-driven app forms use standard HTML form elements. They don’t. Fields are rendered in a custom framework with specific data-id attributes, and the DOM structure changes between classic and new form rendering. I had to correct this multiple times.
Fluent UI version confusion. I asked for Fluent UI v9 but got a mix of v8 and v9 imports. @fluentui/react-components (v9) and @fluentui/react (v8) have completely different APIs. The AI mixed DefaultButton (v8) with makeStyles (v9) in the same file. I had to do a cleanup pass to make everything consistently v9.
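The cleanup pass mostly meant normalizing imports so everything came from the v9 package. A small, purely illustrative example of what v9-consistent code looks like:

```tsx
// Fluent UI v8 and v9 live in different packages with different APIs.
// v8 (what kept sneaking in):  import { DefaultButton } from "@fluentui/react";
// v9 (what the control should use consistently):
import * as React from "react";
import { Button, Tooltip, makeStyles } from "@fluentui/react-components";

const useStyles = makeStyles({ icon: { cursor: "pointer" } });

// Illustrative only - a tiny v9-consistent action button.
export const AuditHistoryButton: React.FC<{ onClick: () => void }> = ({ onClick }) => {
  const styles = useStyles();
  return (
    <Tooltip content="View audit history" relationship="label">
      <Button appearance="subtle" className={styles.icon} onClick={onClick}>
        History
      </Button>
    </Tooltip>
  );
};
```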
Over-engineering. When I asked for “error handling,” the AI created an elaborate retry system with exponential backoff, circuit breakers, and a custom error boundary hierarchy. For a PCF control that makes two API calls. I replaced it with a try/catch and a user-friendly error message.
Wrong test framework setup. The first test generation assumed ts-jest with a specific tsconfig setup that conflicted with the PCF project’s TypeScript configuration. It took two rounds of prompt corrections to get Jest working with the PCF project structure. I ended up providing the working jest.config.js and telling the AI to write tests that match that config exactly.
The Iteration Pattern
After a few days of this, I developed a rhythm.
1. Start with architecture, not features. The first prompt defines file structure, component boundaries, API patterns, and technology choices. No feature code yet.
2. One feature per prompt. Each follow-up adds exactly one capability. Audit fetch. Popup rendering. Config loading. Icon injection. Never two at once.
3. Include the constraints. Tell the AI what NOT to do. “Don’t use context.webAPI for audit queries.” “Don’t mix Fluent UI versions.” “Don’t add retry logic.” Constraints prevent the most common AI mistakes.
4. Test immediately, fix in the same session. Generate tests right after writing the feature. Run them. Fix failures while the context is fresh. Don’t accumulate untested code.
5. Correct with specifics, not frustration. When AI gets something wrong, don’t say “this doesn’t work.” Say “The audit endpoint returns this shape: {exact JSON}. Rewrite the parser to match.”
The prompts that produced the best results shared three qualities: they were specific about the technical boundaries, they included real API shapes or data structures, and they told the AI what pattern to follow rather than what outcome to achieve.
The prompts that produced garbage were vague (“add error handling”), assumed AI knowledge of niche APIs (“use the Dataverse audit endpoint”), or asked for too many things at once (“build the entire UI with tests and config support”).
What I Actually Spent Time On
Here’s the breakdown of where my time went. These are approximate, self-reported allocations, not stopwatch measurements:
| Activity | Time Spent | AI Contribution |
|---|---|---|
| Architecture decisions | 30% | 0% - all me |
| Writing and refining prompts | 20% | N/A |
| Reviewing and correcting AI output | 25% | Generated the starting point |
| Testing and debugging | 15% | Wrote 90+ tests, found 5 bugs |
| PCF packaging and deployment | 10% | 0% - manual process |
AI didn’t make this a one-day project. It made a two-week project into a one-week project. The time savings came from not writing boilerplate React components, not writing 90 test cases by hand, and not looking up Fluent UI component APIs.
The time cost came from correcting wrong assumptions, cleaning up inconsistent code, and fighting hallucinated APIs.
What This Means for PCF Development
I’ll keep using AI for PCF work. But I’ve stopped thinking of it as a code generator. It’s a pair programmer who knows React and TypeScript really well, knows Dataverse APIs approximately, and knows nothing about model-driven app form DOM structure.
That last gap is the important one. PCF development lives at the intersection of standard web development (where AI is strong) and Dynamics 365 platform specifics (where AI is weak). The more platform-specific your code, the more you’ll be correcting.
The skill that matters isn’t writing better prompts. Good prompts help, but they’re table stakes. The skill is reading AI output and knowing immediately whether it’s right. Recognizing that retrieveAuditHistory() doesn’t exist. Seeing that the DOM selector pattern won’t work on new forms. Catching that the optionset restore sends a string instead of an integer.
You need to know what good Dataverse code looks like before you can tell AI to write it. The people who will get the most value from AI-assisted PCF development are the people who could have built it without AI. They’ll just build it faster.
That’s the unsexy truth about AI-assisted development. The skill isn’t prompting. It’s knowing what good code looks like so you can tell when the AI misses.
VP365.ai - Power Platform tools for practitioners who ship. Follow Victoria for new controls and deep dives.