Built by Metorial, the integration platform for agentic AI.

Tools

Run Model

Run synchronous inference on any Fal.ai model endpoint with arbitrary input parameters. This is a generic tool for calling any model that doesn't have a dedicated tool, including 3D generation, image editing, upscaling, and other specialized models. Pass model-specific parameters directly and receive the raw model output.
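Under the hood this maps to a single synchronous HTTP call. The sketch below builds the pieces of such a request; the base URL `https://fal.run` and the `Authorization: Key …` header follow Fal's documented API shape, but treat them as assumptions rather than a guaranteed contract:

```python
import json

FAL_RUN_BASE = "https://fal.run"  # assumed synchronous inference base URL


def build_run_request(endpoint: str, params: dict, api_key: str) -> dict:
    """Construct the URL, headers, and body of a synchronous inference
    call. `endpoint` is a model ID such as "fal-ai/flux/dev"; `params`
    holds the model-specific input and is passed through unchanged."""
    return {
        "url": f"{FAL_RUN_BASE}/{endpoint}",
        "headers": {
            "Authorization": f"Key {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps(params),
    }


req = build_run_request("fal-ai/flux/dev", {"prompt": "a red fox"}, "YOUR_FAL_KEY")
```

Because the tool forwards `params` verbatim, any endpoint-specific field (mesh resolution for 3D models, scale factor for upscalers, and so on) can be supplied without a dedicated tool.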

Upload File

Upload a file to Fal.ai's CDN storage from a URL. The uploaded file can then be referenced by its CDN URL in model inference requests. Useful for providing input images, audio, or video files to Fal.ai models.
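The upload-then-reference pattern can be sketched as two injected callables standing in for the Upload File and Run Model tools; the endpoint name and the `image_url` field below are illustrative stand-ins, not a specific model's schema:

```python
def upload_then_run(source_url, upload, run):
    """Two-step pattern: upload a file, then reference its CDN URL in a
    model inference request. `upload` and `run` are injected callables
    standing in for the Upload File and Run Model tools."""
    cdn_url = upload(source_url)  # CDN URL returned by the upload step
    return run("fal-ai/example-image-model", {"image_url": cdn_url})


result = upload_then_run(
    "https://example.com/photo.png",
    upload=lambda url: "https://fal.media/files/photo.png",  # stub uploader
    run=lambda endpoint, params: params,                     # stub echoes the input
)
```

The stubs make the flow runnable offline; in practice both steps go through the live tools and the CDN URL is whatever the upload returns.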

Search Models

Search and discover available model endpoints on Fal.ai. Supports listing all models, searching by text query or category, and looking up specific models by endpoint ID. Returns model metadata including name, description, category, and status.

Transcribe Audio

Transcribe audio files to text using Fal.ai speech recognition models such as Whisper and Wizper (Fal's optimized Whisper variant). Supports speaker diarization, language detection, and word/segment-level timestamps. Provide an audio URL and receive a full transcription with optional metadata.
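A minimal sketch of assembling the input for a Whisper-style endpoint. Field names such as `audio_url`, `task`, `diarize`, and `language` mirror Fal's Whisper schema but are assumptions here, not verified against every speech model:

```python
def build_transcription_input(audio_url, diarize=False, language=None):
    """Assemble input for a Whisper-style transcription request.
    Optional fields are only included when requested, so defaults
    stay server-side."""
    payload = {"audio_url": audio_url, "task": "transcribe"}
    if diarize:
        payload["diarize"] = True  # ask for per-speaker labels
    if language:
        payload["language"] = language  # skip auto language detection
    return payload


inp = build_transcription_input("https://example.com/call.mp3", diarize=True)
```

Omitting `language` leaves detection to the model, which matches the tool's language-detection support described above.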

Generate Image

Generate images from text prompts or transform existing images using Fal.ai models such as FLUX, Stable Diffusion, Ideogram, Recraft, and more. Supports text-to-image and image-to-image generation with configurable parameters like image size, guidance scale, LoRA adapters, and safety settings. Runs synchronously and returns generated image URLs.
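The configurable parameters can be sketched as a payload builder in the shape FLUX-style endpoints expect. Field names (`image_size`, `guidance_scale`, `loras`, `enable_safety_checker`) follow Fal's FLUX schema but are assumptions here and vary between models:

```python
def build_image_input(prompt, image_size="landscape_4_3", guidance_scale=3.5,
                      loras=None, enable_safety_checker=True):
    """Assemble a text-to-image input for a FLUX-style endpoint."""
    payload = {
        "prompt": prompt,
        "image_size": image_size,
        "guidance_scale": guidance_scale,
        "enable_safety_checker": enable_safety_checker,
    }
    if loras:
        # Each LoRA adapter is referenced by a weight-file URL plus a scale.
        payload["loras"] = [{"path": path, "scale": scale} for path, scale in loras]
    return payload


img_input = build_image_input(
    "a red fox in the snow",
    loras=[("https://example.com/style.safetensors", 0.8)],  # hypothetical adapter
)
```

For image-to-image generation the same payload would additionally carry an input image URL, typically one previously uploaded via the Upload File tool.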

Generate Speech

Generate speech audio from text using Fal.ai text-to-speech models. Supports multiple languages, custom pronunciation, and voice cloning via reference audio. Returns a URL to the generated audio file.

Generate Video

Generate videos from text prompts, images, or other videos using Fal.ai models such as Kling, LTX, Veo, Sora, and more. Supports text-to-video, image-to-video, and video-to-video transformations. Runs synchronously and returns the generated video URL.

Submit Queue Request

Submit an asynchronous inference request to Fal.ai's queue system. This is the recommended approach for long-running model inference. Returns a request ID for polling status or retrieving results later. Optionally provide a webhook URL to receive results automatically upon completion.

Check Queue Status

Check the status of an asynchronous queue request on Fal.ai and optionally retrieve the result. Use after submitting a request with the Submit Queue Request tool. If the request is completed, the result will be included in the response. Also supports canceling queued requests.
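The submit-then-poll loop can be sketched with an injected status fetcher so the logic is runnable offline. The status names (`IN_QUEUE`, `IN_PROGRESS`, `COMPLETED`) match Fal's queue states but are treated as assumptions here:

```python
import time


def poll_until_done(get_status, request_id, interval=2.0, max_tries=30,
                    sleep=time.sleep):
    """Poll a queued request until it completes. `get_status(request_id)`
    stands in for the Check Queue Status tool and returns a dict with at
    least a "status" field; the loop sleeps between attempts and gives
    up after `max_tries` polls."""
    for _ in range(max_tries):
        status = get_status(request_id)
        if status.get("status") == "COMPLETED":
            return status  # completed responses include the result
        sleep(interval)
    raise TimeoutError(f"request {request_id} did not complete")


# Stub fetcher that completes on the third poll.
states = iter([{"status": "IN_QUEUE"}, {"status": "IN_PROGRESS"},
               {"status": "COMPLETED", "result": {"ok": True}}])
final = poll_until_done(lambda rid: next(states), "req-123", sleep=lambda s: None)
```

Injecting `sleep` keeps the sketch testable; production code would use the real tool call and the default `time.sleep`, or skip polling entirely by supplying a webhook at submit time.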