Model Openness Framework (MOF)

The Generative AI Commons at the LF AI & Data Foundation has designed and developed the Model Openness Framework (MOF), a comprehensive system for evaluating and classifying the completeness and openness of machine learning models. The framework assesses which components of the model development lifecycle are publicly released, and under what licenses, providing an objective basis for evaluation. The framework is continually evolving; please participate in the Generative AI Commons to provide feedback and suggestions.
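
To make the idea of "components and their licenses" concrete, here is a minimal sketch in Python of how a model's released artifacts might be recorded for an MOF-style evaluation. The component names and licenses below are illustrative placeholders, not the official MOF component list.

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One artifact from the model development lifecycle."""
    name: str             # e.g. "Model Architecture" (illustrative name)
    released: bool        # is this artifact publicly available?
    license: str | None   # SPDX identifier it was released under, if any

# Hypothetical example inventory for a model under evaluation
components = [
    Component("Model Architecture", True, "Apache-2.0"),
    Component("Final Model Parameters", True, "Apache-2.0"),
    Component("Training Code", False, None),
]

for c in components:
    status = f"released under {c.license}" if c.released else "not released"
    print(f"{c.name}: {status}")
```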

Model Openness Tool (MOT)

To implement the MOF, we’ve created the Model Openness Tool (MOT). The tool evaluates each MOF criterion and generates a score based on how well each is met, offering a practical, user-friendly way to apply the MOF to your model and produce a clear, self-service score.

How It Works

The MOT presents users with 16 questions about their model, and users provide a detailed response to each. Based on these inputs, the tool calculates a score and classifies the model’s openness into one of three classes: 1, 2, or 3.
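
The following Python sketch illustrates the kind of answers-to-class mapping described above. The question labels and class thresholds here are hypothetical placeholders; the actual MOF criteria and class boundaries are defined by the framework itself.

```python
# 16 placeholder labels standing in for the MOF criteria
QUESTIONS = [f"question_{i}" for i in range(1, 17)]

def classify(responses: dict[str, bool]) -> int:
    """Map per-criterion answers to an openness class (1, 2, or 3)."""
    met = sum(responses.get(q, False) for q in QUESTIONS)
    if met == len(QUESTIONS):  # every criterion satisfied (hypothetical rule)
        return 1
    if met >= 10:              # hypothetical threshold for class 2
        return 2
    return 3

# Example: a model meeting 12 of the 16 hypothetical criteria
answers = {q: True for q in QUESTIONS[:12]}
print(f"Openness class: {classify(answers)}")  # -> Openness class: 2
```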

Why We Developed MOT

Our goal in developing the MOT was to offer a straightforward way to evaluate machine learning models against the MOF. The tool helps users understand which components are included with each model and the licenses associated with those components, clarifying what can and cannot be done with the model and its parts.

Explore the Model Openness Framework and try the Model Openness Tool today to see how your models measure up.

Launch Model Openness Tool