React Native Amaryllis
On-device multimodal AI for React Native
Amaryllis is the base inference module for local text, image, and streaming workflows. A companion module, working name amaryllis-components, extends that foundation into governed adaptive components.
npm install react-native-amaryllis
# or
yarn add react-native-amaryllis
# or
pnpm add react-native-amaryllis
Native inference, streaming hooks, multimodal requests, and context APIs provide the substrate for higher-level adaptive UI modules.
Optimized pipelines for Android and iOS with predictable startup.
Send prompts with images for grounded, visual responses.
Use hooks and observables to render partial tokens fast.
Centralize configuration for the entire React Native app.
Fine-tune behavior with LoRA adapters on GPU devices (see the config sketch below).
Cancel and manage sessions with explicit APIs.
Keep performance, memory, and safety predictable.
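The LoRA workflow is not shown in the snippets at the end of this page. As a rough sketch, assuming adapters are wired in through the provider config, enabling one could look like the following; the loraPath key is a hypothetical name, not confirmed Amaryllis API, while the other keys are the documented ones.

// Sketch only: `loraPath` is an assumed key, not confirmed Amaryllis API.
const config = {
  modelPath: 'gemma3-1b-it-int4.task',
  maxTopK: 32,
  maxTokens: 512,
  loraPath: 'style-adapter.task', // hypothetical LoRA adapter artifact (GPU only)
};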
Show partial tokens early for a responsive UX.
Always cancel async inference in cleanup handlers (see the cleanup sketch below).
Limit image count and size to protect memory.
Prevent invalid or unsafe native file access.
Surface custom error types and graceful fallbacks.
Track model versions and update strategies.
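The explicit session-management API is not spelled out on this page. As a hedged sketch of the cleanup practice above, using the documented useInferenceAsync hook and leaving a placeholder where the cancel call would go:

import React, { useEffect, useState } from 'react';
import { Text } from 'react-native';
import { useInferenceAsync } from 'react-native-amaryllis';

// Sketch of the cleanup pattern. The cancel step is a placeholder comment,
// since the exact session-cancellation API is not shown in this excerpt.
function StreamingAnswer({ prompt }: { prompt: string }) {
  const [text, setText] = useState('');
  const [error, setError] = useState<Error | null>(null);

  const generate = useInferenceAsync({
    onResult: (chunk: string) => setText((prev) => prev + chunk),
    onError: (err: Error) => setError(err),
  });

  useEffect(() => {
    let mounted = true;
    generate({ prompt }).catch((err: Error) => {
      if (mounted) setError(err);
    });
    return () => {
      mounted = false;
      // Cancel any in-flight inference here via the explicit session APIs
      // (placeholder; the real call is not documented in this excerpt).
    };
  }, [prompt]);

  return <Text>{error ? error.message : text}</Text>;
}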
Companion module RFC
The proposed companion module adds governed component specs, build-time generation workflows, and safe runtime personalization on top of the base Amaryllis inference layer.
Keep react-native-amaryllis focused on on-device multimodal inference. Put adaptive component specs, registries, validators, and personalization contracts in a companion package.
Build-time AI generates component source from a ComponentSpec; the result then goes through validation, review, and publishing.
AI creates bounded variants within declared slots, layouts, copy rules, and design-token constraints.
On-device AI returns schema-validated props, variants, slot text, or JSON patches. It does not emit executable component code.
A typed, versioned spec defines UI structure, behavior, allowed AI operations, target runtime, and policy requirements before code is generated.
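The RFC leaves the spec's exact shape open. As an illustrative sketch of what such a typed, versioned spec could contain (every field name below is an assumption, not a settled schema):

// Illustrative only: field names are assumptions, not the RFC's final schema.
interface ComponentSpec {
  name: string;                        // e.g. 'PromoCard'
  version: string;                     // semver, so artifacts trace back to a contract
  runtime: 'react-native';             // target runtime for generated TSX
  slots: Record<string, { maxLength?: number }>;  // copy slots AI may fill
  variants: string[];                  // bounded set of variants AI may choose from
  allowedTokens: string[];             // design tokens AI may select
  allowedOperations: Array<'fillSlot' | 'chooseVariant' | 'selectToken'>;
  policy: {
    importAllowlist: string[];         // modules generated code may import
    requiresAccessibilityCheck: boolean;
    requiresApproval: boolean;         // human review gate before publishing
  };
}

Versioning the spec lets generated artifacts, reviews, and approvals be traced back to the exact contract they were produced under.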
Build-time TSX generation moves through validation, static checks, preview, diff review, and artifact publishing instead of direct freeform AI code edits.
Runtime AI can fill slots, choose variants, and select design tokens, but it must return validated data, not executable TSX, JSX, JavaScript, imports, or raw markup.
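As a sketch of that runtime contract, building on the illustrative ComponentSpec above: the model returns plain data describing slot text, variant choices, or token selections, and a validator rejects anything outside the spec or anything that resembles code or markup. The types and checks are assumptions, not the RFC's final design.

// Sketch of the runtime contract; types and checks are illustrative.
type RuntimePatch =
  | { op: 'fillSlot'; slot: string; text: string }
  | { op: 'chooseVariant'; variant: string }
  | { op: 'selectToken'; token: string };

function isSafePatch(patch: RuntimePatch, spec: ComponentSpec): boolean {
  if (patch.op === 'fillSlot') {
    const slot = spec.slots[patch.slot];
    if (!slot) return false;                                   // unknown slot
    if (slot.maxLength && patch.text.length > slot.maxLength) return false;
    return !/[<>{}]|import\s|require\(/.test(patch.text);      // no TSX/JS fragments
  }
  if (patch.op === 'chooseVariant') return spec.variants.includes(patch.variant);
  return spec.allowedTokens.includes(patch.token);
}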
Import allowlists, accessibility checks, design token rules, network restrictions, provenance metadata, and approval gates keep generated output reviewable.
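One of those gates, the import allowlist, can be sketched as a small build-time check over generated source (again illustrative, not the RFC's actual tooling):

// Illustrative build-time guard: flag imports outside the spec's allowlist.
function findDisallowedImports(generatedSource: string, spec: ComponentSpec): string[] {
  const importRe = /import\s[^'"]*['"]([^'"]+)['"]/g;
  const violations: string[] = [];
  let match: RegExpExecArray | null;
  while ((match = importRe.exec(generatedSource)) !== null) {
    if (!spec.policy.importAllowlist.includes(match[1])) violations.push(match[1]);
  }
  return violations;
}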
Wrap your app, then generate from hooks.
import React from 'react';
import { LLMProvider } from 'react-native-amaryllis';

// Wrap the app once so every screen shares the same model configuration.
export default function App() {
  return (
    <LLMProvider
      config={{
        modelPath: 'gemma3-1b-it-int4.task',
        visionEncoderPath: 'mobilenet_v3_small.tflite',
        visionAdapterPath: 'mobilenet_v3_small.tflite',
        maxTopK: 32,
        maxNumImages: 2,
        maxTokens: 512,
      }}
    >
      {/* App components */}
    </LLMProvider>
  );
}
import { useInferenceAsync } from 'react-native-amaryllis';

// Inside a React component:
const generate = useInferenceAsync({
  onResult: (chunk, isFinal) => {
    // Update the UI with partial tokens as they stream in
  },
  onError: (err) => setError(err),
});

// Trigger from an event handler:
await generate({ prompt, images });
Sample UI built on top of Amaryllis streaming hooks.