[AIT-311] Add Swift example code for AI Transport (with Claude skill for automatic translation) #3192
Conversation
When invoked as, for example:

> /translate-examples-to-swift translate all the example code in @src/pages/docs/ai-transport

it will translate all of the referenced examples to Swift, making sure to produce code which has been verified to compile. It also runs an independent verification subagent which reviews the correctness of the translation and performs a second compilation attempt.

As part of the verification, it also generates a single-page app with a UI that makes it easy for a human to review the translations; you can then export a Markdown or JSON file with the feedback (for passing back to Claude so that it can iterate on it).

I don't yet have a great process for getting it to apply review feedback when starting from a fresh context; I've just been telling it something like "translations x were generated using this skill; now apply feedback y", but it doesn't do a great job of updating the translation JSON files (and thus the data displayed in the review app) to reflect the changes it's made without wiping out unrelated notes from the original translation.

The skill also gives Claude the ability to review Swift translations in isolation (i.e. not as part of a translation run and thus without the supporting artifacts). For this to work properly, we need to keep the context comments (added by the translation process) in the MDX files. I think that we should keep these _anyway_, because we should at some point consider setting up tooling to ensure that _all_ of our code examples in the docs repo actually are valid and compile, and this would be a stepping stone to that.

Note that the sequential numbering of the examples within a file (e.g. streaming-1) might be a nuisance to maintain as we add further, interleaved, examples into a file; we can cross that bridge when we come to it.

I wrote the original version of this skill and then got Claude to do some heavy iteration on it based on my feedback when testing. I haven't reviewed any of the skill's supporting files (i.e. the scripts, HTML, or schemas) in any detail.

As part of this change (the first shared addition to the .claude directory), I've changed the gitignore rules to only ignore local scope (definitions given in [1]).

A few notes from Claude about some of the decisions we made:

> During translation, I encountered two compilation errors due to unknown
> types (ARTPublishResultSerial and ARTStringifiable). My process for
> resolving them was inefficient:
>
> 1. Fetched the auto-generated SDK docs at ably.com/docs/sdk/cocoa/v1.2/
> 2. The class page didn't show the methods/types I needed
> 3. Gave up after two requests and tried to fetch from GitHub
>
> The user pointed out that after running `swift build`, the ably-cocoa
> source is already available locally in .build/checkouts/. I used
> find and grep to locate the header files, which had the exact type
> definitions I needed. This was faster and more reliable than web fetches.

> The test harness comment (showing the function signature and
> parameters) serves two purposes:
>
> 1. Reviewers can verify the translation compiles correctly
> 2. Future editors can modify the Swift code and test compilation without
>    having to reverse-engineer what context was originally used
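For concreteness, here is a minimal sketch of the shape a harness-wrapped translation might take. The function name, parameter list, and comment format here are hypothetical illustrations, not the skill's actual output:

```swift
import Ably

// Hypothetical context comment, of the kind the translation process could
// record so the compilation context can be reconstructed later:
//
//   harness: func streamingExample(channel: ARTRealtimeChannel)
//
func streamingExample(channel: ARTRealtimeChannel) {
    // The translated docs example goes in the body; this subscribe call is
    // just a placeholder standing in for real example code.
    channel.subscribe { message in
        print("Received: \(String(describing: message.data))")
    }
}
```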
> Use per-subagent harness directories and add consolidation script
>
> Problems solved:
>
> - Parallel subagents would clobber each other writing to a shared harness directory
> - Manual JSON merging burned context and was error-prone

A few things that could be improved in the future (I had to draw a line under this task at some point):

- the review app for some reason requires that you click twice on the "Flag" or "Approve" button before it collapses the element
- the review app's exported Markdown file's references are done by line number, which is a fairly meaningless value given that we're inserting new code into the file as part of translation; switch it to use IDs, as the JSON export does
- make the review app accept multi-line comments
- we may be able to simplify the test harness by instead using Swift's "MainActor isolation by default" mode (see the sketch after this list)

[1] https://code.claude.com/docs/en/settings#configuration-scopes
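On that last point, a minimal sketch of how the harness package might opt into that mode, assuming Swift 6.2's `defaultIsolation` Swift setting; the package name and version constraints are illustrative, and none of this is in the PR itself:

```swift
// swift-tools-version:6.2
// Hypothetical Package.swift for the compilation harness.
import PackageDescription

let package = Package(
    name: "TranslationHarness",
    platforms: [.macOS(.v11)],
    dependencies: [
        .package(url: "https://github.com/ably/ably-cocoa", from: "1.2.0")
    ],
    targets: [
        .target(
            name: "TranslationHarness",
            dependencies: [.product(name: "Ably", package: "ably-cocoa")],
            swiftSettings: [
                // Infer unannotated declarations as @MainActor, which may
                // remove the need for isolation boilerplate in the harness.
                .defaultIsolation(MainActor.self)
            ]
        )
    ]
)
```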
Done using the Claude skill added in 459a3a0. I've reviewed the translations.

Decisions:

- Have not translated the tool call progress examples that use LiveObjects; we agreed we'll leave those until we have the path-based API in Swift (the same decision was already made in the Java translations, I believe).
- Vapor seemed like the most appropriate web framework to use for the JWT examples; from what I can tell it's still the dominant one (compared to, say, Hummingbird). Claude validated the Vapor code by running a server with this example code and checking that it generated a JWT that could be used to perform an Ably REST request.
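To give a flavour of the Vapor approach, here is a minimal sketch of a route that mints an Ably-compatible JWT using JWTKit; the route path, capability string, and key handling are illustrative assumptions, not the translated docs example itself:

```swift
import Vapor
import JWTKit

// Claims Ably expects in a JWT: standard iat/exp plus x-ably-capability.
struct AblyTokenPayload: JWTPayload {
    enum CodingKeys: String, CodingKey {
        case issuedAt = "iat"
        case expiration = "exp"
        case capability = "x-ably-capability"
    }

    var issuedAt: IssuedAtClaim
    var expiration: ExpirationClaim
    var capability: String

    func verify(using signer: JWTSigner) throws {
        try expiration.verifyNotExpired()
    }
}

func routes(_ app: Application) throws {
    // An Ably API key has the form "<keyName>:<keySecret>"; the JWT is
    // signed with the secret and carries the key name in its "kid" header.
    let apiKey = Environment.get("ABLY_API_KEY") ?? "appId.keyId:secret"
    let parts = apiKey.split(separator: ":", maxSplits: 1).map(String.init)
    guard parts.count == 2 else {
        throw Abort(.internalServerError, reason: "Malformed ABLY_API_KEY")
    }
    let (keyName, keySecret) = (parts[0], parts[1])

    let signers = JWTSigners()
    signers.use(.hs256(key: keySecret), kid: JWKIdentifier(string: keyName))

    app.get("ably-jwt") { req -> String in
        let payload = AblyTokenPayload(
            issuedAt: .init(value: Date()),
            expiration: .init(value: Date().addingTimeInterval(3600)),
            capability: #"{"*":["*"]}"#
        )
        return try signers.sign(payload, kid: JWKIdentifier(string: keyName))
    }
}
```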
Description

Outstanding work:

- main (now that Python and Java examples have been merged)
- `direction` in AIT until "Attach history calls" #3187

Checklist