
Developer Toolchain
End-to-End CIM AI Model Development Toolchain
From model import and quantization through compiler optimization to hardware simulation and deployment, FFI-SDK provides a complete development experience that maximizes AI model performance on FFI8805 series hardware.
FFI-SDK covers every stage from AI model development to deployment, providing industry-leading compilers, profilers, and simulators.
Compile ONNX/TFLite/PyTorch models to FFI8805 native instruction set with automatic graph optimization, memory scheduling, and operator fusion.
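Operator fusion merges adjacent operators (for example, a convolution followed by its activation) into one kernel so intermediate results never round-trip through memory. A minimal sketch of the idea in Python — the fusion rules, op names, and linear op list here are illustrative assumptions, not FFI-SDK's actual compiler pass:

```python
# Illustrative graph-optimization pass: greedily fuse adjacent fusable
# op pairs (e.g. Conv -> ReLU) into one fused kernel. Ops are modeled
# as a flat sequence for clarity; a real compiler works on a full graph.

FUSABLE = {("Conv", "ReLU"), ("MatMul", "Add")}  # assumed fusion rules

def fuse_ops(ops):
    """One left-to-right pass that merges each fusable adjacent pair."""
    fused = []
    i = 0
    while i < len(ops):
        if i + 1 < len(ops) and (ops[i], ops[i + 1]) in FUSABLE:
            fused.append(f"{ops[i]}+{ops[i + 1]}")  # single fused kernel
            i += 2
        else:
            fused.append(ops[i])
            i += 1
    return fused

print(fuse_ops(["Conv", "ReLU", "MaxPool", "MatMul", "Add"]))
# → ['Conv+ReLU', 'MaxPool', 'MatMul+Add']
```

Fewer kernels means fewer intermediate tensors written back to memory, which is especially valuable on compute-in-memory hardware where data movement dominates cost.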
Layer-by-layer, operator-level profiling including latency, memory bandwidth, and power metrics with bottleneck identification.
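Bottleneck identification boils down to ranking layers by the metric that dominates end-to-end cost. A small sketch over per-layer records — the field names and sample numbers are assumptions for illustration, not actual FFI-SDK profiler output:

```python
# Illustrative bottleneck analysis over per-layer profiler records.
# Record fields and values are made-up sample data.

layers = [
    {"name": "conv1", "latency_us": 120.0, "bandwidth_gbs": 14.2, "power_mw": 310},
    {"name": "conv2", "latency_us": 480.0, "bandwidth_gbs": 22.8, "power_mw": 620},
    {"name": "fc",    "latency_us": 95.0,  "bandwidth_gbs": 30.1, "power_mw": 180},
]

def find_bottleneck(records, metric="latency_us"):
    """Return the name of the layer that dominates the chosen metric."""
    return max(records, key=lambda r: r[metric])["name"]

total = sum(r["latency_us"] for r in layers)
share = max(r["latency_us"] for r in layers) / total
print(f"bottleneck: {find_bottleneck(layers)} ({share:.0%} of total latency)")
```

Switching the `metric` argument lets the same analysis surface bandwidth-bound or power-bound layers instead of latency-bound ones.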
Cycle-accurate hardware simulator with > 98% correlation to actual silicon, enabling model validation without physical hardware.
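A cycle-accurate simulator models the pipeline, memory system, and stalls in detail; a useful first intuition, though, is the ideal-throughput bound below. The array throughput and clock are illustrative assumptions, not FFI8805 specifications:

```python
# First-order latency estimate for one layer on an assumed CIM array:
# cycles ≈ ceil(MACs / MACs-per-cycle). A cycle-accurate simulator
# additionally models memory stalls and pipeline effects, so real
# numbers are higher. Array size and clock are made-up assumptions.
import math

MACS_PER_CYCLE = 4096   # assumed CIM array throughput
CLOCK_HZ = 500e6        # assumed core clock

def estimate_latency_us(macs):
    cycles = math.ceil(macs / MACS_PER_CYCLE)
    return cycles / CLOCK_HZ * 1e6

# 3x3 conv, 64 -> 64 channels, 56x56 output: MACs = 56*56*64*64*9
macs = 56 * 56 * 64 * 64 * 9
print(f"{estimate_latency_us(macs):.1f} us")  # → 56.4 us
```

The gap between this ideal bound and simulated latency is exactly what the profiler's bandwidth and stall metrics explain.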
INT4/INT8/FP16 mixed-precision quantization with automatic strategy search, < 0.5% accuracy loss while maximizing inference speed.
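The core mechanism behind INT8 quantization is mapping floating-point values onto an 8-bit grid via a scale factor. A minimal symmetric per-tensor sketch — this shows the mechanism only, not FFI-SDK's automatic mixed-precision strategy search:

```python
# Minimal symmetric per-tensor INT8 quantization sketch: derive a scale
# from the max absolute value, round to int8, then dequantize to
# measure the round-trip error. Sample weights are made up.

def quantize_int8(values):
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.82, -1.27, 0.05, 0.33, -0.91]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, f"max abs error = {max_err:.4f}")
```

Mixed-precision search generalizes this by choosing INT4, INT8, or FP16 per layer so that total accuracy loss stays within budget while throughput is maximized.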
Command-line interface for batch compilation, automated testing, and script integration, ideal for CI/CD pipelines and large-scale deployment.
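A batch-compilation driver for CI typically walks a model directory, compiles each model, and collects pass/fail results for reporting. In this sketch the compile step is a stub, because the actual FFI-SDK CLI invocation is not documented here; in practice it would shell out via `subprocess`:

```python
# Batch-compilation driver sketch for CI: compile every model in a
# directory and collect results. compile_model is a stub standing in
# for the real (undocumented here) FFI-SDK CLI call.
from pathlib import Path

def compile_model(path: Path) -> bool:
    """Stub for the real compiler invocation (e.g. subprocess.run)."""
    return path.suffix in {".onnx", ".tflite"}  # pretend: supported formats succeed

def batch_compile(model_dir: str):
    results = {}
    for model in sorted(Path(model_dir).glob("*")):
        results[model.name] = compile_model(model)
    return results

# Example: build a throwaway model tree and run the batch.
import tempfile
with tempfile.TemporaryDirectory() as d:
    for name in ("a.onnx", "b.tflite", "c.txt"):
        (Path(d) / name).touch()
    print(batch_compile(d))
# → {'a.onnx': True, 'b.tflite': True, 'c.txt': False}
```

Returning a name-to-status map makes it trivial for a CI job to fail the build when any model regresses.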
GitHub Actions / GitLab CI templates with model version management, automated regression testing, and deployment pipelines.
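A CI template of this kind wires compilation and regression testing into every push. The workflow below is an illustrative GitHub Actions sketch — the `ffi-sdk` command names are assumptions, not the documented CLI:

```yaml
# Illustrative GitHub Actions workflow for model regression testing.
# The ffi-sdk commands are assumed placeholders; adapt to the real CLI.
name: model-ci
on: [push, pull_request]
jobs:
  compile-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Compile models          # hypothetical CLI invocation
        run: ffi-sdk compile models/*.onnx --target ffi8805
      - name: Regression tests        # compare against golden outputs
        run: ffi-sdk simulate --check golden/
```

Gating merges on the simulate step catches accuracy or latency regressions before any model reaches hardware.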
FFI-SDK simplifies the complex CIM compilation process into five intuitive steps from model import to hardware deployment.
ONNX / TFLite / PyTorch / PaddlePaddle
INT4/INT8/FP16 mixed-precision auto-search
Graph optimization + memory scheduling + op fusion
Cycle-accurate simulation & perf estimation
One-click deploy to FFI8805 target hardware
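The five steps above can be sketched as a linear pipeline of stages. Stage names and the artifact-passing convention here are assumptions for illustration, not the FFI-SDK API:

```python
# Illustrative five-stage pipeline mirroring the flow above:
# import -> quantize -> compile -> simulate -> deploy. Each stage is a
# stub that records its name and wraps the running artifact.

def run_pipeline(model_path):
    trace = []
    artifact = model_path
    for stage in ["import", "quantize", "compile", "simulate", "deploy"]:
        trace.append(stage)
        artifact = f"{stage}({artifact})"  # stand-in for the real stage output
    return artifact, trace

artifact, trace = run_pipeline("resnet18.onnx")
print(trace)
# → ['import', 'quantize', 'compile', 'simulate', 'deploy']
```

In a real toolchain each stage would consume and produce concrete artifacts (quantized model, compiled binary, simulation report), but the ordering contract is the same.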
FFI-SDK natively supports major AI frameworks with no manual model format conversion required.
Explore the complete FFI-SDK compilation flow, step by step, from model import to deployment.
Contact our technical team for FFI-SDK trial license and support.