Nebula SDK: the all-in-one SDK for using the LLMs provided by 0G Inference
This is how we built our agents-sdk on top of 0G.
We first built chat and memory modules on top of the 0G chat-completion, Compute, and Storage APIs, giving developers streaming chat along with hybrid memory (ephemeral + persistent).
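The hybrid memory idea can be sketched as below. Note this is an illustrative sketch, not the SDK's actual API: `HybridMemory`, `PersistentStore`, and `InMemoryStore` are hypothetical names, and the in-memory store stands in for the real 0G Storage backend.

```typescript
// Sketch of hybrid memory: recent turns live in an ephemeral buffer,
// and every message is also written through to a persistent store
// (0G Storage in the real SDK; a Map stands in here).

interface Message { role: "user" | "assistant"; content: string }

interface PersistentStore {
  append(sessionId: string, msg: Message): Promise<void>;
  load(sessionId: string): Promise<Message[]>;
}

// In-memory stand-in for the 0G Storage backend.
class InMemoryStore implements PersistentStore {
  private data = new Map<string, Message[]>();
  async append(sessionId: string, msg: Message): Promise<void> {
    const log = this.data.get(sessionId) ?? [];
    log.push(msg);
    this.data.set(sessionId, log);
  }
  async load(sessionId: string): Promise<Message[]> {
    return this.data.get(sessionId) ?? [];
  }
}

class HybridMemory {
  private ephemeral: Message[] = [];
  constructor(
    private sessionId: string,
    private store: PersistentStore,
    private windowSize = 4, // recent turns kept in the hot buffer
  ) {}

  async remember(msg: Message): Promise<void> {
    this.ephemeral.push(msg);
    if (this.ephemeral.length > this.windowSize) this.ephemeral.shift();
    await this.store.append(this.sessionId, msg); // write-through to persistence
  }

  // Context for the next completion: the prompt uses the recent window,
  // while the full history remains recoverable from the store.
  recentContext(): Message[] {
    return [...this.ephemeral];
  }
}
```

The design point is write-through: the ephemeral window keeps prompts small, while the persistent log means a session can be rebuilt after a restart.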
After that, we opened pull requests against major LLM frameworks and providers like OpenRouter, LlamaIndex, the Vercel AI SDK, and LangChain, adding the two 0G-supported inference models so they appear directly in those projects' docs and can be used by developers right away.
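Frameworks like these generally speak the OpenAI-style chat-completions format, so once the models are listed, calling them is mostly a matter of pointing a compatible client at the inference endpoint. A hedged sketch of the request shape follows; the endpoint URL and model id are placeholders, not confirmed values.

```typescript
// Sketch of an OpenAI-style chat-completion request body aimed at a
// 0G inference endpoint. The URL and model id below are illustrative
// placeholders, not real values from the 0G docs.

interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
  stream: boolean;
}

function buildChatRequest(model: string, userPrompt: string): ChatRequest {
  return {
    model,
    messages: [{ role: "user", content: userPrompt }],
    stream: true, // the SDK streams tokens back to the caller
  };
}

// Sending it would look roughly like this (not executed here):
// fetch("https://<0g-inference-endpoint>/v1/chat/completions", {
//   method: "POST",
//   headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
//   body: JSON.stringify(buildChatRequest("<0g-model-id>", "Hello")),
// });
```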
Next, we integrated existing ElizaOS plugins and official MCP tools into the SDK so that it supports them out of the box.
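Out-of-the-box tool support can be pictured as a small registry the agent loop consults. The field names below mirror MCP's tool-definition shape (name, description, JSON-Schema input), but the registry itself is an illustrative sketch, not the SDK's real interface.

```typescript
// Illustrative tool registry: MCP-style tool definitions are registered
// once, listed for the model, and invoked by name from the agent loop.

interface ToolDef {
  name: string;
  description: string;
  // MCP describes inputs with JSON Schema; kept loose here.
  inputSchema: Record<string, unknown>;
  handler: (args: Record<string, unknown>) => Promise<unknown>;
}

class ToolRegistry {
  private tools = new Map<string, ToolDef>();

  register(tool: ToolDef): void {
    this.tools.set(tool.name, tool);
  }

  // What gets advertised to the model alongside the prompt.
  list(): { name: string; description: string }[] {
    return [...this.tools.values()].map(({ name, description }) => ({ name, description }));
  }

  async call(name: string, args: Record<string, unknown>): Promise<unknown> {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    return tool.handler(args);
  }
}
```

In this picture, an ElizaOS plugin or MCP server just contributes `ToolDef`-shaped entries at startup, which is what makes them work without per-tool wiring.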
Finally, we aggregated everything into a single SDK and docs page so any developer can find, understand, and use it all in one place.

