How to Run Flux Schnell on Mac Locally


💡
Want to try out FLUX online without additional hassle?

Try it out now at Anakin AI! 👇👇👇
FLUX.1 Pro Online | Anakin
Better than Midjourney and Stable Diffusion, Try the Open Source, State-of-the-art image generation Tool: FLUX Pro Online!
FLUX.1-schnell | Free AI tool | Anakin
Flux Schnell is a fast, open-source text-to-image model designed for efficient image generation, offering high-quality outputs with minimal computational resources.
FLUX.1-dev | Free AI tool | Anakin
Flux Dev is an open-weight, guidance-distilled text-to-image model that offers high-quality image generation for non-commercial applications, balancing performance and efficiency.

Flux Schnell is a powerful image generation model that can create high-quality images from text prompts. This article will guide you through the process of setting up and running Flux Schnell locally on a MacBook Pro with an M3 Max chip. We'll cover the requirements, installation steps, and provide a step-by-step tutorial to generate your first image.

Requirements to Run Flux Schnell on Mac Locally

Before we begin, ensure your system meets the following requirements:

  • MacBook Pro with M3 Max chip
  • At least 40 GB of available RAM
  • macOS Sonoma or later
  • Xcode Command Line Tools installed
  • Homebrew package manager
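
If you'd like a quick programmatic sanity check before continuing, the short Python sketch below (run with any Python 3 you already have, e.g. python3 in Terminal) confirms you're on Apple silicon and reports your macOS version. Treat it as an optional convenience, not part of the install.

import platform

# Report macOS version and CPU architecture; on an Apple silicon MacBook Pro
# running Sonoma or later you should see something like "14.x" and "arm64".
print("macOS version:", platform.mac_ver()[0])
print("architecture:", platform.machine())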

Local Flux Schnell Installation Steps

Let's break down the installation process into manageable steps:

Step 1: Install Miniconda

First, we'll install Miniconda, which will help us manage our Python environment:

  1. Open Terminal
  2. Run the following command to download the Miniconda installer:
curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh
  3. Install Miniconda:
sh Miniconda3-latest-MacOSX-arm64.sh
  4. Follow the prompts to complete the installation
  5. Close and reopen Terminal to apply changes

Step 2: Create and Activate Conda Environment

Now, let's create a dedicated environment for Flux:

conda create -n flux python=3.11 -y
conda activate flux
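
As an optional check that the new environment is active, you can run this short snippet from inside it (assuming you created and named the environment as shown above):

import sys

# Run inside the activated "flux" environment: the interpreter should report
# Python 3.11.x and live under the environment's prefix (ending in envs/flux).
print(sys.version)
print(sys.prefix)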

Step 3: Install PyTorch

Install PyTorch with MPS (Metal Performance Shaders) support:

pip install torch==2.3.1
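
Before moving on, it's worth confirming that this PyTorch build can see the MPS backend. The small sketch below uses the standard torch.backends.mps checks and should print True for both on an M-series Mac:

import torch

# Verify that PyTorch was built with MPS support and that the backend is
# usable on this machine; both checks should print True on an M3 Max.
print("PyTorch version:", torch.__version__)
print("MPS built:", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())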

Step 4: Install Diffusers and Dependencies

Install the required libraries:

pip install git+https://github.com/huggingface/diffusers.git
pip install transformers==4.43.3 sentencepiece==0.2.0 accelerate==0.33.0 protobuf==5.27.3
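
A quick, optional import check confirms the libraries installed cleanly into the flux environment:

# Confirm the key libraries import and report the versions pinned above.
import diffusers
import transformers
import accelerate

print("diffusers:", diffusers.__version__)
print("transformers:", transformers.__version__)
print("accelerate:", accelerate.__version__)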

Let's Run Flux Schnell Locally Now!

Now that we have everything installed, let's walk through the process of running Flux Schnell and generating an image.

Step 1: Prepare the Script

Create a new Python file named flux_generate.py and add the following code:

import torch
from diffusers import FluxPipeline
import diffusers

# Modify the rope function to handle MPS device
_flux_rope = diffusers.models.transformers.transformer_flux.rope
def new_flux_rope(pos: torch.Tensor, dim: int, theta: int) -> torch.Tensor:
    assert dim % 2 == 0, "The dimension must be even."
    if pos.device.type == "mps":
        return _flux_rope(pos.to("cpu"), dim, theta).to(device=pos.device)
    else:
        return _flux_rope(pos, dim, theta)

diffusers.models.transformers.transformer_flux.rope = new_flux_rope

# Load the Flux Schnell model
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", revision='refs/pr/1', torch_dtype=torch.bfloat16).to("mps")

# Set the prompt for image generation
prompt = "A cat holding a sign that says hello world"

# Generate the image
out = pipe(
    prompt=prompt,
    guidance_scale=0.,
    height=1024,
    width=1024,
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]

# Save the generated image
out.save("flux_image.png")

Step 2: Run the Script

In Terminal, navigate to the directory containing flux_generate.py and run:

python flux_generate.py

This process should take approximately 30 seconds, utilizing up to 40 GB of RAM on your MacBook Pro M3 Max.
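
Generation time varies with thermal conditions and background load, so if you want to measure it on your own machine you can wrap the generation call in a timer. This rough sketch assumes the pipe and prompt variables defined earlier in flux_generate.py:

import time

# Time a single generation run; `pipe` and `prompt` are assumed to be the
# objects defined earlier in flux_generate.py.
start = time.perf_counter()
out = pipe(
    prompt=prompt,
    guidance_scale=0.,
    height=1024,
    width=1024,
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]
print(f"Generation took {time.perf_counter() - start:.1f} seconds")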

Understanding the Code

Let's break down the key components of the script:

Importing Libraries: We import the necessary modules from PyTorch and Diffusers.

Modifying the ROPE Function: The new_flux_rope function is a workaround to handle the MPS device (Apple's Metal Performance Shaders). It ensures compatibility with the M3 Max chip.

Loading the Model: We use FluxPipeline.from_pretrained() to load the Flux Schnell model, specifying the model repository and revision.

Setting the Prompt: We define the text prompt that describes the image we want to generate.

Generating the Image: The pipe() function is called with various parameters:

  • prompt: The text description of the image
  • guidance_scale: Set to 0, as recommended for Flux Schnell; the distilled model doesn't rely on classifier-free guidance
  • height and width: Image dimensions (1024x1024 pixels)
  • num_inference_steps: Number of denoising steps (4 in this case)
  • max_sequence_length: Maximum length of the input sequence (256)

Saving the Image: The generated image is saved as "flux_image.png" in the same directory.

Optimizing Performance for Local Flux Schnell

To get the best performance from Flux Schnell on your MacBook Pro M3 Max:

  1. Close unnecessary applications to free up RAM and CPU resources.
  2. Ensure good ventilation for your MacBook to prevent thermal throttling.
  3. Experiment with num_inference_steps: Increasing this value can improve image quality but will also increase generation time (see the sketch after this list).
  4. Adjust max_sequence_length: Longer sequences allow for more detailed prompts but require more memory and processing time.
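
As a rough illustration of the speed/quality trade-off in point 3, the sketch below reuses the pipe object from flux_generate.py to render the same prompt at a few different step counts (the specific values are just examples):

# Compare step counts; `pipe` is assumed to be the FluxPipeline loaded
# earlier in flux_generate.py. More steps generally means better quality
# but proportionally longer generation time.
for steps in (2, 4, 8):
    image = pipe(
        prompt="A cat holding a sign that says hello world",
        guidance_scale=0.,
        height=1024,
        width=1024,
        num_inference_steps=steps,
        max_sequence_length=256,
    ).images[0]
    image.save(f"flux_image_{steps}_steps.png")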

Troubleshooting for Local Flux Schnell

If you encounter issues:

  1. CUDA errors: Ensure you're using the MPS backend (to("mps")) instead of CUDA, as the M3 Max doesn't support CUDA.
  2. Memory errors: Try reducing the image size or max_sequence_length if you're running out of memory (a reduced-size example follows this list).
  3. Module not found errors: Double-check that all required packages are installed in your conda environment.
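
For the memory errors in point 2, one simple workaround is to generate at a lower resolution with a shorter sequence length, as in this sketch (again assuming the pipe object from flux_generate.py):

# Reduced-footprint generation: smaller output and a shorter text sequence
# lower peak memory use at the cost of detail; `pipe` is assumed to be the
# pipeline loaded in flux_generate.py.
out = pipe(
    prompt="A cat holding a sign that says hello world",
    guidance_scale=0.,
    height=512,
    width=512,
    num_inference_steps=4,
    max_sequence_length=128,
).images[0]
out.save("flux_image_small.png")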

Next Steps for Local Flux Schnell

Now that you have Flux Schnell running locally, here are some ideas to explore:

  1. Experiment with prompts: Try different text descriptions to see how the model interprets them.
  2. Adjust parameters: Play with guidance_scale, num_inference_steps, and image dimensions to see their effects on the output.
  3. Batch processing: Modify the script to generate multiple images from a list of prompts (see the sketch after this list).
  4. Integration: Consider integrating Flux Schnell into your own applications or workflows.
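
For idea 3, batch processing can be as simple as looping over a list of prompts with the already-loaded pipeline. The prompt list below is purely illustrative:

# Generate one image per prompt; `pipe` is assumed to be the FluxPipeline
# loaded in flux_generate.py, and the prompts are hypothetical examples.
prompts = [
    "A cat holding a sign that says hello world",
    "A watercolor painting of a lighthouse at sunrise",
    "A cozy cabin in a snowy forest, digital art",
]

for i, prompt in enumerate(prompts):
    image = pipe(
        prompt=prompt,
        guidance_scale=0.,
        height=1024,
        width=1024,
        num_inference_steps=4,
        max_sequence_length=256,
    ).images[0]
    image.save(f"flux_image_{i:02d}.png")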

Conclusion

Running Flux Schnell locally on your MacBook Pro M3 Max opens up a world of creative possibilities. With its impressive speed and quality, you can generate stunning images right on your own machine. As you become more familiar with the model and its parameters, you'll be able to fine-tune your results and push the boundaries of AI-generated art.

Remember to respect copyright and ethical considerations when using AI-generated images, and always give credit to the model and its creators when sharing your results.