Vocabulary infrastructure for Alexa+ and the AI Action SDK
The Scenario
“Alexa, teach me a new word.” Alexa+, powered by generative and agentic AI via Amazon Bedrock, chains multiple API calls in a single conversation. The Alexa AI Action SDK calls Word Orb for a verified definition, streams pronunciation audio to the Echo speaker, delivers a structured lesson, and runs a comprehension quiz — a complete voice learning experience in one multi-step agentic flow.
Step 1 — Word Orb looks up the word
One API call returns a verified definition, translations, pronunciation audio, and etymology.
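A minimal sketch of what that single lookup might return. The field names here are illustrative assumptions, not the documented Word Orb schema; in production this would be one HTTPS call, while here the data is canned so the example is self-contained.

```python
from dataclasses import dataclass

@dataclass
class WordOrbResult:
    word: str
    definition: str
    translations: dict       # e.g. {"es": "...", "fr": "..."}
    pronunciation_url: str   # streamable audio URL for Echo playback
    etymology: str

def lookup(word: str) -> WordOrbResult:
    # Stand-in for the real Word Orb API call: returns the verified
    # definition, translations, pronunciation audio, and etymology.
    return WordOrbResult(
        word=word,
        definition="lasting for a very short time",
        translations={"es": "efímero", "fr": "éphémère"},
        pronunciation_url="https://example.invalid/audio/ephemeral.mp3",
        etymology="from Greek ephēmeros, 'lasting only a day'",
    )

result = lookup("ephemeral")
```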
Step 2 — Lesson Orb delivers a structured lesson
A 5-phase lesson (hook → story → wonder → action → wisdom) delivered in the explorer teaching archetype.
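The 5-phase structure can be sketched as follows. The phase names come from the text above; the archetype parameter and per-phase placeholder content are assumptions for illustration.

```python
# Phase order as described: hook → story → wonder → action → wisdom.
LESSON_PHASES = ["hook", "story", "wonder", "action", "wisdom"]

def build_lesson(word: str, archetype: str = "explorer") -> dict:
    # In production each phase carries generated-then-verified content;
    # here each phase holds a placeholder string.
    return {
        "word": word,
        "archetype": archetype,
        "phases": {p: f"{p} content for '{word}'" for p in LESSON_PHASES},
    }

lesson = build_lesson("ephemeral")
```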
Step 3 — Quiz Orb assesses comprehension
Assessment questions aligned to the lesson content through the knowledge graph. Your agent tests what it taught.
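The alignment rule, that every question must draw on content the lesson actually delivered, can be sketched like this. All names are illustrative; only the "quiz tests what was taught" constraint comes from the text.

```python
# A tiny lesson with two taught phases (placeholder content).
lesson = {
    "word": "ephemeral",
    "phases": {
        "hook": "ephemeral means lasting a very short time",
        "story": "mayflies live for a single day",
    },
}

def build_quiz(lesson: dict) -> list:
    # One comprehension question per taught phase, so every question
    # is grounded in content the learner has actually seen.
    return [
        {
            "phase": phase,
            "prompt": f"What did the {phase} say about '{lesson['word']}'?",
            "source": text,
        }
        for phase, text in lesson["phases"].items()
    ]

quiz = build_quiz(lesson)
```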
Step 4 — The Knowledge Graph connects everything
30,288 connections link words to lessons to assessments. Every quiz question tests what the lesson taught.
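A toy version of the word → lesson → assessment linkage, with a two-edge in-memory list standing in for the 30,288-connection graph. Node naming is an assumption.

```python
# Directed edges: word links to its lesson, lesson links to its quiz item.
edges = [
    ("word:ephemeral", "lesson:ephemeral-explorer"),
    ("lesson:ephemeral-explorer", "quiz:ephemeral-q1"),
]

def linked(graph: list, src: str) -> list:
    # Follow outgoing edges from a node.
    return [dst for s, dst in graph if s == src]

# A quiz question is valid only if it is reachable from the word's lesson,
# which is how the graph guarantees the quiz tests what the lesson taught.
lesson_nodes = linked(edges, "word:ephemeral")
quiz_nodes = [q for l in lesson_nodes for q in linked(edges, l)]
```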
Why this matters for Amazon Alexa
Alexa AI Action SDK integration — one Action that chains Word Orb → Lesson Orb → Quiz Orb for a complete voice learning experience
Alexa+’s Multi-Agent SDK enables a vocabulary agent that coordinates with other Alexa services — OpenTable, Uber, and Ticketmaster are already integrated
Amazon Bedrock’s model-agnostic architecture means the underlying LLM may change — Orb’s deterministic content stays constant across Nova and Anthropic models
240,000 pronunciation audio files stream directly to Echo devices via URL — sub-5ms edge response for real-time voice interaction
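The chained flow described above can be sketched end to end. These function names mirror the three steps but are assumptions, not the Alexa AI Action SDK's actual interface; each stub returns canned data so the sketch runs on its own.

```python
def lookup_word(word: str) -> dict:
    # Step 1 stand-in: verified definition plus a streamable audio URL.
    return {
        "word": word,
        "definition": "lasting for a very short time",
        "audio_url": "https://example.invalid/audio/ephemeral.mp3",
    }

def build_lesson(entry: dict) -> dict:
    # Step 2 stand-in: the 5-phase structured lesson.
    return {
        "word": entry["word"],
        "phases": ["hook", "story", "wonder", "action", "wisdom"],
    }

def build_quiz(lesson: dict) -> list:
    # Step 3 stand-in: comprehension check grounded in the lesson.
    return [{"prompt": f"Define '{lesson['word']}'"}]

def run_vocabulary_action(word: str) -> dict:
    # One multi-step agentic flow: word → lesson → quiz, with the
    # pronunciation audio URL handed to the Echo for playback.
    entry = lookup_word(word)
    lesson = build_lesson(entry)
    quiz = build_quiz(lesson)
    return {"entry": entry, "lesson": lesson, "quiz": quiz}

session = run_vocabulary_action("ephemeral")
```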