
Batch API with gemini-3-flash-preview returns error 13 for video input (works with gemini-2.5-flash) #1890

@seanbell

Description

Environment details

  • Programming language: Python 3.12
  • OS: macOS
  • Language runtime version: Python 3.12.4
  • Package version: google-genai 1.56.0

Steps to reproduce

  1. Upload a video to the Files API
  2. Submit a batch job with gemini-3-flash-preview referencing the video
  3. Wait for job completion
  4. Observe error 13 in the response

The same video works with gemini-2.5-flash in batch mode, and also works with gemini-3-flash-preview in sync mode.
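A minimal sketch of the synchronous call that succeeds with the same model (assumes `GEMINI_API_KEY` is set and `video_uri` points to a file already uploaded via the Files API; imports are deferred into the function so the snippet loads without the SDK installed):

```python
def describe_sync(video_uri: str, model: str = 'gemini-3-flash-preview') -> str:
    """Send the same video prompt through the non-batch generate_content path."""
    import os
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=os.environ['GEMINI_API_KEY'])
    response = client.models.generate_content(
        model=model,
        contents=[
            types.Part.from_uri(file_uri=video_uri, mime_type='video/mp4'),
            'Describe what you see briefly.',
        ],
        config=types.GenerateContentConfig(
            system_instruction='You are helpful. Be concise.',
            response_mime_type='application/json',
            response_schema={
                'type': 'object',
                'properties': {'description': {'type': 'string'}},
                'required': ['description'],
            },
        ),
    )
    return response.text
```

This mirrors the batch request below (same parts, system instruction, and schema) and returns a JSON string when run against the preview model.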

Minimal reproduction script:

#!/usr/bin/env python3
"""
Minimal reproduction: Gemini 3 Flash batch API fails with video (error 13)
while Gemini 2.5 Flash works fine with the same video.

Requirements:
    pip install google-genai yt-dlp

Usage:
    export GEMINI_API_KEY=your_key_here
    python repro.py
"""

import os
import subprocess
import tempfile
import time

from google import genai
from google.genai import types


def main():
    api_key = os.environ.get('GEMINI_API_KEY')
    if not api_key:
        print("Set GEMINI_API_KEY environment variable")
        return

    client = genai.Client(api_key=api_key)

    # Download a public YouTube video
    print("Downloading test video from YouTube...")
    tmp_dir = tempfile.mkdtemp()
    yt_path = f"{tmp_dir}/test.mp4"
    subprocess.run([
        'yt-dlp', '-f', 'worst[ext=mp4]',
        '-o', yt_path,
        'https://www.youtube.com/watch?v=dQw4w9WgXcQ'
    ], capture_output=True, check=True)

    # Trim to the first 5 seconds and re-encode (480p, 24 fps, no audio)
    print("Encoding first 5 seconds...")
    encoded_path = f"{tmp_dir}/encoded.mp4"
    subprocess.run([
        'ffmpeg', '-y', '-i', yt_path, '-t', '5',
        '-c:v', 'libx264', '-preset', 'fast', '-crf', '23',
        '-r', '24', '-vf', 'scale=480:-2', '-an',
        '-movflags', '+faststart', encoded_path
    ], capture_output=True, check=True)
    print(f"Encoded video: {os.path.getsize(encoded_path) // 1024} KB")

    # Upload video
    print("Uploading to Files API...")
    uploaded = client.files.upload(
        file=encoded_path,
        config=types.UploadFileConfig(display_name='test_video', mime_type='video/mp4')
    )
    video_uri = uploaded.uri
    print(f"Uploaded: {video_uri}")

    # Cleanup
    os.unlink(yt_path)
    os.unlink(encoded_path)
    os.rmdir(tmp_dir)

    # Simple request with system instruction and JSON schema
    request = {
        'contents': [{
            'parts': [
                {'file_data': {'file_uri': video_uri, 'mime_type': 'video/mp4'}},
                {'text': 'Describe what you see briefly.'}
            ],
            'role': 'user'
        }],
        'config': {
            'system_instruction': {
                'parts': [{'text': 'You are helpful. Be concise.'}]
            },
            'response_mime_type': 'application/json',
            'response_schema': {
                'type': 'object',
                'properties': {
                    'description': {'type': 'string'},
                },
                'required': ['description']
            },
        },
    }

    # Test both models
    models = ['gemini-2.5-flash', 'gemini-3-flash-preview']
    jobs = []

    print("\nSubmitting batch jobs...")
    for model in models:
        job = client.batches.create(
            model=model,
            src=[request],
            config={'display_name': f'test_{model}'}
        )
        jobs.append((model, job.name))
        print(f"  {model}: {job.name}")

    # Poll until complete
    print("\nPolling for results...")
    completed_states = {'JOB_STATE_SUCCEEDED', 'JOB_STATE_FAILED', 'JOB_STATE_CANCELLED'}
    pending = set(range(len(jobs)))

    while pending:
        time.sleep(30)
        for i in list(pending):
            model, job_name = jobs[i]
            job = client.batches.get(name=job_name)
            state = job.state.name if hasattr(job.state, 'name') else str(job.state)

            if state in completed_states:
                pending.discard(i)
                responses = getattr(job.dest, 'inlined_responses', None) or []
                if responses:
                    resp = responses[0]
                    if getattr(resp, 'error', None):
                        print(f"  {model}: ERROR {resp.error.code} - {resp.error.message}")
                    elif resp.response and resp.response.text:
                        print(f"  {model}: SUCCESS")
                    else:
                        print(f"  {model}: {state} (empty response)")
                else:
                    print(f"  {model}: {state} (no inlined responses)")
            else:
                print(f"  {model}: {state}")


if __name__ == '__main__':
    main()
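
For reference, the response_schema in the script constrains output to a JSON object with a single required string field; a quick stdlib check against a sample payload (the sample text is illustrative, not actual model output):

```python
import json

# Illustrative payload matching the schema: one required string field.
sample = '{"description": "A short clip from a music video."}'
parsed = json.loads(sample)

assert set(parsed) == {'description'}
assert isinstance(parsed['description'], str)
print('sample conforms to the schema')
```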

Expected behavior:

Both models should successfully process the video and return a JSON response.

Actual behavior:

  • gemini-3-flash-preview reports error 13 (gRPC INTERNAL).
  • gemini-2.5-flash succeeds.

Console output:

Downloading test video from YouTube...
Encoding to 5 seconds...
Encoded video: 165 KB
Uploading to Files API...
Uploaded: https://generativelanguage.googleapis.com/v1beta/files/[redacted]

Submitting batch jobs...
  gemini-2.5-flash: batches/[redacted]
  gemini-3-flash-preview: batches/[redacted]

Polling for results...
  gemini-2.5-flash: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-2.5-flash: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-2.5-flash: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-2.5-flash: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-2.5-flash: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-2.5-flash: SUCCESS
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_PENDING
  gemini-3-flash-preview: JOB_STATE_RUNNING
  gemini-3-flash-preview: JOB_STATE_RUNNING
  gemini-3-flash-preview: ERROR 13 - Internal error encountered.

Additional notes:

  • Tested with multiple video encodings (baseline h264, various resolutions/framerates)
  • Tested with and without system_instruction
  • Tested with and without JSON schema
  • Error occurs consistently (not intermittent)
  • Job completes with JOB_STATE_SUCCEEDED but response contains error 13
  • gemini-3-pro-preview has the same issue

Metadata

Labels

  • priority: p2 (Moderately-important priority. Fix may not be included in next release.)
  • status: awaiting user response
  • type: bug (Error or flaw in code with unintended results or allowing sub-optimal usage patterns.)
