Conversation
Code Review
This pull request does a great job of updating the documentation to reflect the breaking changes in GenUI version 0.9. The renaming of components and updates to the API are consistently applied across the documents. However, several code examples in the `get-started.md` file are incomplete, commented out, or contain confusing variable names. For a guide aimed at new users, it's crucial that these examples are clear and fully functional. I've provided specific suggestions to address these issues and improve the clarity and completeness of the documentation.
Note: Security Review has been skipped due to the limited scope of the PR.
```dart
_connector.stream.listen((A2uiMessage message) {
  // Pass messages to Adapter/Controller
});
```
The code example is incomplete. The comment `// Pass messages to Adapter/Controller` is not helpful for a user trying to follow the guide. Since `_connector.stream` emits `A2uiMessage` objects and `_surfaceController.handleMessage` accepts them, the implementation should connect them directly.
Suggested change:

```diff
-_connector.stream.listen((A2uiMessage message) {
-  // Pass messages to Adapter/Controller
-});
+_connector.stream.listen(_surfaceController.handleMessage);
```
Yes, this is fine as an example.
```dart
void dispose() {
  _textController.dispose();
  _uiAgent.dispose();
  _a2uiMessageProcessor.dispose();
  _contentGenerator.dispose();
  _surfaceController.dispose();
  super.dispose();
}
```
The `_connector` likely holds resources like a WebSocket connection that need to be released. It should be disposed of in the `dispose` method, similar to how `_contentGenerator` was handled in the previous version. Please add `_connector.dispose();` (or a similar method like `close()`) to prevent resource leaks.
Suggested change:

```diff
 void dispose() {
   _textController.dispose();
   _uiAgent.dispose();
-  _a2uiMessageProcessor.dispose();
-  _contentGenerator.dispose();
   _surfaceController.dispose();
+  _connector.dispose();
   super.dispose();
 }
```
Yes, it should dispose.
```dart
_genUiConversation = GenUiConversation(
  a2uiMessageProcessor: _a2uiMessageProcessor,
  contentGenerator: contentGenerator,
  // ... initialize your LLM Client of choice using promptBuilder.systemPrompt
```
This comment makes the example incomplete and hard to follow. It would be much more helpful to show a concrete (even if simplified) example of how to initialize an LLM client and connect it to the rest of the GenUI components. For instance, you could continue the `google_generative_ai` example from earlier.
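For illustration, such a continuation might look roughly like the sketch below. It reuses the `GenerativeModel` setup shown later in this guide; how the model feeds the content generator is app-specific, so the wiring here is an assumption, not the documented API.

```dart
// Sketch only: names mirror other snippets in this guide.
final model = GenerativeModel(
  model: 'gemini-2.5-flash',
  apiKey: 'YOUR_API_KEY', // Or set GEMINI_API_KEY environment variable.
  systemInstruction: Content.system(promptBuilder.systemPrompt),
);

// Assumed glue: a content generator backed by the model above.
_genUiConversation = GenUiConversation(
  a2uiMessageProcessor: _a2uiMessageProcessor,
  contentGenerator: contentGenerator,
);
```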
No, this is vague on purpose.
```dart
void _sendMessage(String text) {
  if (text.trim().isEmpty) return;
  // _conversation.sendMessage(text);
  // Or pass chunk texts directly via LLM SDK stream into the transportAdapter
}
```
The example for sending a message is entirely commented out, which is not helpful for a user. Please provide a working, uncommented example of how to send a message, using either `_conversation.sendMessage(text)` or by streaming to the transport adapter.
Suggested change:

```diff
 void _sendMessage(String text) {
   if (text.trim().isEmpty) return;
-  // _conversation.sendMessage(text);
-  // Or pass chunk texts directly via LLM SDK stream into the transportAdapter
+  _conversation.sendMessage(text);
 }
```
I think just uncomment this line: `// _conversation.sendMessage(text);`

```diff
 final messageProcessor = SurfaceController(catalogs: catalogs);
 final transportAdapter = A2uiTransportAdapter();
 transportAdapter.messageStream.listen(messageProcessor.handleMessage);

 final promptBuilder = PromptBuilder.chat(
   catalog: catalog,
-  systemInstruction: 'You are a helpful assistant.',
-  modelName: 'models/gemini-2.5-flash',
+  instructions: 'You are a helpful assistant.',
 );

 final model = GenerativeModel(
   model: 'gemini-2.5-flash',
   apiKey: 'YOUR_API_KEY', // Or set GEMINI_API_KEY environment variable.
   systemInstruction: Content.system(promptBuilder.systemPrompt),
 );

-final conversation = GenUiConversation(
-  contentGenerator: contentGenerator,
-  a2uiMessageProcessor: messageProcessor,
+final conversation = Conversation(
+  surfaceController: messageProcessor,
+  transportAdapter: transportAdapter,
 );
```
For better clarity and consistency with the new API, it would be best to rename the `messageProcessor` variable to `surfaceController`. The class is now `SurfaceController`, and using the matching name for the variable makes the example easier to understand, especially for new users.
Suggested change:

```dart
final surfaceController = SurfaceController(catalogs: catalogs);
final transportAdapter = A2uiTransportAdapter();
transportAdapter.messageStream.listen(surfaceController.handleMessage);

final promptBuilder = PromptBuilder.chat(
  catalog: catalog,
  instructions: 'You are a helpful assistant.',
);

final model = GenerativeModel(
  model: 'gemini-2.5-flash',
  apiKey: 'YOUR_API_KEY', // Or set GEMINI_API_KEY environment variable.
  systemInstruction: Content.system(promptBuilder.systemPrompt),
);

final conversation = Conversation(
  surfaceController: surfaceController,
  transportAdapter: transportAdapter,
);
```
```dart
final responseText = await _connector.connectAndSend(
  message,
  clientCapabilities: A2uiClientCapabilities(supportedProtocols: ['a2ui/0.9.0'])
);

// Handling response depends on your app's logic
```
The `responseText` variable is unused, and the comment `// Handling response depends on your app's logic` is vague for an example. The response from the agent should be handled, for instance by displaying it to the user. This would make the example more complete and useful.
Suggested change:

```diff
 final responseText = await _connector.connectAndSend(
   message,
   clientCapabilities: A2uiClientCapabilities(supportedProtocols: ['a2ui/0.9.0'])
 );
-// Handling response depends on your app's logic
+if (responseText.isNotEmpty) {
+  setState(() {
+    _messages.insert(0, ChatMessage.model([TextPart(responseText)]));
+  });
+}
```
I think we can ignore this for now.
```dart
  additionalTools: _a2uiMessageProcessor.getTools(),
);

// Pass promptBuilder.systemPrompt to your LLM Config
```
This is vague on purpose.
Visit the preview URL for this PR (updated for commit e92123a): https://flutter-docs-prod--pr13161-update-genui-docs-vtyga3he.web.app |
```diff
 [Enable the Gemini API]: https://firebase.google.com/docs/gemini-in-firebase/set-up-gemini
 [Firebase's Flutter setup guide]: https://firebase.google.com/docs/flutter/setup
-[`genui_firebase_ai`]: {{site.pub-pkg}}/genui_firebase_ai
+[`firebase_vertexai`]: {{site.pub-pkg}}/firebase_vertexai
```
This package is discontinued.
```diff
 to add Firebase to your app.

-4. Use `dart pub add` to add `genui` and [`genui_firebase_ai`][] as
+4. Use `dart pub add` to add `genui` and [`firebase_vertexai`][] as
```
This isn't the right package; I believe it should be `firebase_ai_logic`. @sfshaza2
The GenUI package has been updated to 0.9 (from 0.7) and includes breaking changes. This PR updates the corresponding docs and examples.
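As a quick orientation, the renames applied throughout these docs can be sketched as a before/after (reconstructed from the snippets in this PR; exact signatures may differ):

```diff
-// GenUI 0.7
-final conversation = GenUiConversation(
-  contentGenerator: contentGenerator,
-  a2uiMessageProcessor: messageProcessor,
-);
+// GenUI 0.9
+final surfaceController = SurfaceController(catalogs: catalogs);
+final transportAdapter = A2uiTransportAdapter();
+transportAdapter.messageStream.listen(surfaceController.handleMessage);
+final conversation = Conversation(
+  surfaceController: surfaceController,
+  transportAdapter: transportAdapter,
+);
```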