2026-02-02 GenAI WG Working Session
Date: February 2, 2026
Time: 7:00 AM Pacific / 10:00 AM Eastern / 4:00 PM CET
Join Zoom Meeting:
https://us06web.zoom.us/j/89994460240?pwd=DRNueiBANbsdgYqpndnS0h2gNBLs86.1&jst=2
Passcode: 118634
Session Type: Working Session
This is a hands-on working session, not a group readout. We'll be actively working through setup and configuration together.
Agenda
Playground Definition & Architecture
GitHub Project Setup
Q&A
Session Details
1. Playground Definition & Architecture
Objective: Define what our "AI Playground" means and how it supports working group experimentation.
Discussion points:
What is the playground?
Environment for experimenting with AI agents and MCP servers
Sandbox for testing Gen AI tooling integrations
Learning environment for working group members
Architecture considerations:
Local vs. cloud-hosted environments
MCP server configurations (internal vs. external)
IT security and governance requirements (from Jan 26 discussion)
Authentication and access control patterns
Design requirements to define:
What tooling do we need? (GitHub Copilot, VS Code, etc.)
What sample data/projects will we use?
How do we handle multi-tenant/multi-org setups?
2. GitHub Project Setup
Objective: Walk through and set up the COVESA/genai-virtual-car repository structure.
Work items:
Repository structure and organization
CI/CD workflow setup (GitHub Actions)
Development environment configuration
devcontainer setup
Required extensions and tools
MCP server configurations
Contribution guidelines
Issue templates and project boards
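As a starting point for the devcontainer setup item above, a minimal `.devcontainer/devcontainer.json` sketch might look like the following. The base image, extension list, and post-create command are assumptions for discussion, not decisions of the group:

```json
{
  "name": "genai-virtual-car",
  "image": "mcr.microsoft.com/devcontainers/python:3.12",
  "customizations": {
    "vscode": {
      "extensions": [
        "GitHub.copilot",
        "GitHub.copilot-chat"
      ]
    }
  },
  "postCreateCommand": "pip install -r requirements.txt"
}
```

MCP server configurations could live alongside this file once the group settles on internal vs. external servers.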
3. Q&A
Open floor for:
Technical questions on setup
Clarifications on playground scope
Ideas and suggestions
Blockers or concerns
Preparation / Pre-work
To get the most out of this working session, please have ready:
Access to the COVESA GitHub organization
VS Code with GitHub Copilot installed (if available)
Review the genai-virtual-car repository (private preview)
Working Group Technical Summary
The following technical topics were identified and discussed during the working session:
Terminology and Nomenclature Definition
Development of a shared nomenclature document to clearly define the key agentic AI system concepts used by the working group. The goal is to create a simple nomenclature map that links agreed terminology (e.g. orchestration, agent workflow) to concrete examples of tools, frameworks, and implementation patterns. For example, LangGraph-based agent workflows and CI/CD orchestration with GitHub Actions both share the concept of 'orchestration'.
By defining these simple terms we establish a common language that prevents misunderstanding and miscommunication, and provides clarity for design patterns and choices.
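A running nomenclature map could start as something as simple as the sketch below; the term/example pairings are illustrative assumptions, not agreed definitions:

```python
# Minimal running nomenclature map: agreed term -> concrete examples.
# Entries are illustrative placeholders to be expanded by the working group.
NOMENCLATURE = {
    "orchestration": [
        "LangGraph-based agent workflow",
        "CI/CD orchestration with GitHub Actions",
    ],
    "agent workflow": [
        "LLM agent calling MCP tools in sequence",
    ],
    "tool": [
        "MCP server tool definition (input/output schema)",
    ],
}

def examples_for(term: str) -> list[str]:
    """Return known concrete examples for an agreed term (empty if undefined)."""
    return NOMENCLATURE.get(term.lower(), [])
```

Keeping it as plain data makes it trivial to render into a table or check in reviews.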
Caution: do not spend too much effort on this goal; it can instead be a simple 'running' document that expands as the project progresses.
Data Sources and Artifacts for AI-Assisted Engineering
Identify representative data formats and tools to be used in the playground environment, including but not limited to:
Sphinx-Needs–based requirements documentation
IBM DOORS artifacts
PlantUML models and diagrams
SysML v2.0
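To make the artifact list concrete, here is a tiny sketch of producing a PlantUML component model as text, the kind of artifact a playground tool could emit or parse. The component names are invented examples:

```python
def plantuml_component_diagram(components, links):
    """Render a minimal PlantUML component diagram as text.

    components: list of component names
    links: list of (source, target) pairs
    """
    lines = ["@startuml"]
    lines += [f"component [{name}]" for name in components]
    lines += [f"[{src}] --> [{dst}]" for src, dst in links]
    lines.append("@enduml")
    return "\n".join(lines)

# Example: a toy virtual-car model (names are placeholders)
diagram = plantuml_component_diagram(
    ["Sensor", "ECU", "Actuator"],
    [("Sensor", "ECU"), ("ECU", "Actuator")],
)
```

The same text-first approach applies to Sphinx-Needs sources, which are plain reStructuredText and similarly easy for tooling to generate and diff.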
MCP Server Interaction Patterns and Internal Architecture
Discussion of interaction patterns and architectural design considerations for MCP servers, including:
LLM ↔ MCP Client ↔ MCP Server tool execution pattern
Typical registered tooling
Lazy MCP
Sandboxed MCP tool execution as-code
Context management strategies for LLM interactions (session-level, shared, or versioned state)
Inter-server coordination patterns, where multiple MCP servers are deployed
Isolation and security considerations (might not be in scope)
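The LLM ↔ MCP client ↔ MCP server tool-execution pattern above can be sketched in plain Python. This is a simplified stand-in, not the real MCP protocol (which runs JSON-RPC over stdio or HTTP): the "server" is just a tool registry and the "client" a dispatcher, and the example tool is invented:

```python
# Simplified stand-in for the LLM <-> MCP client <-> MCP server pattern.

class MCPServer:
    """Holds registered tools, as an MCP server would expose them."""
    def __init__(self):
        self._tools = {}

    def register(self, name, fn, description=""):
        self._tools[name] = {"fn": fn, "description": description}

    def list_tools(self):
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call_tool(self, name, arguments):
        return self._tools[name]["fn"](**arguments)


class MCPClient:
    """Mediates between the LLM and one or more servers."""
    def __init__(self, servers):
        self.servers = servers

    def call(self, tool_name, arguments):
        # Route the call to whichever server advertises the tool.
        for server in self.servers:
            if any(t["name"] == tool_name for t in server.list_tools()):
                return server.call_tool(tool_name, arguments)
        raise KeyError(f"unknown tool: {tool_name}")


# Example: one server exposing a requirements-lookup tool (toy data).
server = MCPServer()
server.register(
    "get_requirement",
    lambda req_id: {"id": req_id, "text": f"Requirement {req_id} (placeholder)"},
    "Fetch a requirement by ID",
)
client = MCPClient([server])
# The LLM would pick this tool from list_tools() and issue the call:
result = client.call("get_requirement", {"req_id": "REQ-001"})
```

The inter-server coordination question from the list maps to the client routing across multiple `MCPServer` instances.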
MCP Server Input/Output Tool Schema Definitions
Definition of MCP server tool interfaces, specifically input/output schema choices (e.g. Markdown vs. JSON).
Evaluation of Agentic System Patterns
Exploration of methods to evaluate and compare the performance and effectiveness of different MCP interaction patterns and agentic orchestration workflows, independently of the specific LLM.
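One way to make such LLM-independent evaluation concrete is a small harness that runs competing patterns on the same task and records simple metrics. The two patterns and the metrics below are invented examples, not agreed benchmarks:

```python
# Toy evaluation harness: compare two orchestration patterns on the same
# task, independent of any specific LLM.

def pattern_single_call(data):
    """Pattern A: one aggregated tool call."""
    calls = 1
    result = sum(data)
    return result, calls

def pattern_per_item(data):
    """Pattern B: one tool call per item."""
    calls = len(data)
    result = 0
    for x in data:
        result += x
    return result, calls

def evaluate(pattern, data, expected):
    """Record correctness and cost (tool-call count) for one pattern."""
    result, calls = pattern(data)
    return {"correct": result == expected, "tool_calls": calls}

data = [1, 2, 3]
report = {p.__name__: evaluate(p, data, expected=6)
          for p in (pattern_single_call, pattern_per_item)}
```

Real patterns would wrap MCP interactions, but the harness shape (fixed task, expected output, cost counters) stays the same.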
Follow-Ups:
Ragip Selcuk presented his experience building agentic workflows in the automotive space, specifically around PM3.1. Determinism of agentic AI workflows is difficult to achieve.
Action Item: Follow up in the next bi-weekly working group meeting and/or offline to compose the key pain points of a deterministic review agent system.
Upcoming Sessions
| Date | Type | Focus |
|---|---|---|
| Feb 2, 2026 | Working Session | Today - Playground & GitHub Setup |
| Feb 9, 2026 | Working Session | TBD |
| Feb 16, 2026 | Working Session | TBD |
| Feb 23, 2026 | WG Meeting | Monthly readout and progress report |
References
Minutes
Meeting Attendees:
Yansong Chen, Stephen Jiggins, Georg Doll, Bogdan Racotea, Clay Nelson, Holger Dormann, Mike Nunnery, Nastaran Saberi, Oliver Kral, Ragip Selcuk
Session Notes:
Systems requirements: for orchestration, which tools can be used for the architecture flow?
An agent orchestration tool is needed.
Q from Bogdan: are we trying to create an MCP service that takes information from DOORS and documents it in the agent orchestration? Bogdan: we know how to do it, but we do not have an MCP server that actually takes information from DOORS/Polarion and continues the flow into the orchestration of agents.
Leverage GitHub tooling rather than having the LLM try to read every individual artifact; use the playground and an open-source solution to scale. The commercial version from useblocks does this. Bogdan: on his side it would be an open-source, agentic workflow.
Stephen: put together a directed graph, a DAG workflow with each node being a key execution step. Alternatively, we can define MCP tools exposed to the LLM via a free MCP server, with script tools inside them; LLM-based agents then have access to the tools and can aggregate the information. All that remains is deciding how you want to execute, e.g. having MCP servers call each other.
Two-way pattern: two MCP servers calling each other and passing results down to an LLM. Key takeaway: for orchestration, the best option is for the two MCP servers to decide what the most viable information to pass on to the LLM is, leaving that decision making close to the LLM.
How does this flow from servers to LLM execute: in GitHub Codespaces or in VS Code? Right now it is a local agent in GitHub Copilot in VS Code.
Spec Kit from GitHub is designed in multiple layers. The issue is that with multiple features, changes start being made to the previous repo without being tracked; there is no version tracking.
GitHub - github/spec-kit: 💫 Toolkit to help you get started with Spec-Driven Development
In the project: set up the agent, download the release, run the init script (for PowerShell or for bash), copy the agent, then set up the agent in GitHub.
Ragip (catching up): 15+ years in automotive, hands-on development of LLM agent workflows; explained his experience and shared the issues encountered. If there is time, he will discuss his thoughts. It is not easy to create deterministic systems; guidelines are needed. Example of what he has tried in the automotive space: PM3.1, the specification for Management Part 3, making a workflow and, at the end, also some review agents.
Steve's takeaway: decide how to orchestrate; set up LangGraph, start interacting with an agent that pipes into the playground, and ultimately run some code. Start defining what we mean by orchestration, workflows, agents, LLM, and MCP server in pragmatic terms. Agree that an MCP server is a tool definition that extracts information and provides it to some agent; a second layer is MCP server interaction. Agents are never deterministic, so we need to keep track. We do not care what the LLM does; what matters is whether the information reaches the LLM in the first place, and in what form.
Ragip shared his experience: the issue is that the LLM does the decision making, which is hard to make deterministic. For a project management system, focus only on the output of the PM plan documentation, the output of the Polarion system, and finding blocking points; define requirements well. Stephen recommended setting up an offline discussion. Bogdan: regarding where to use an LLM, not every step in the workflow needs an LLM; in some cases a script suffices. Bogdan has a simple example to share.
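Bogdan's point that not every workflow step needs an LLM can be sketched as a pipeline where deterministic script steps handle extraction and structural checks, and a single stubbed function stands in for the one place an LLM is actually consulted. All step contents and the requirement format are invented examples:

```python
# Workflow where only one node is an LLM decision point; the rest are
# deterministic scripts, per the note that "in some cases a script suffices".

def extract_requirements(doc: str) -> list[str]:
    """Deterministic script step: pull requirement lines from a document."""
    return [line.strip() for line in doc.splitlines()
            if line.strip().startswith("REQ-")]

def check_ids_unique(reqs: list[str]) -> bool:
    """Deterministic script step: structural check, no LLM needed."""
    ids = [r.split(":")[0] for r in reqs]
    return len(ids) == len(set(ids))

def llm_review(reqs: list[str]) -> str:
    """Stub for the single LLM decision point (a real call would go here)."""
    return f"reviewed {len(reqs)} requirements"

doc = "REQ-001: door shall lock\nnote: draft\nREQ-002: door shall unlock"
reqs = extract_requirements(doc)
assert check_ids_unique(reqs)
summary = llm_review(reqs)
```

Keeping the LLM boundary this narrow is one way to make the surrounding workflow deterministic and testable, which was the pain point raised for review agents.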