Architecture overview

There are several components in the AI4OS stack that are relevant for users. Later on you will see how each type of user can take advantage of them.

The Dashboard

The AI4OS Dashboard allows users to access computing resources to deploy, train, and perform inference with AI modules. The Dashboard simplifies deployment and hides some of the technical details that most users do not need to worry about.

The AI modules

The AI modules are developed both by the platform and by users. For creating modules, we provide the AI4OS Modules Template as a starting point.

In addition to AI modules, the Dashboard also allows users to deploy tools (e.g. a Federated Server).

The DEEPaaS API

The DEEPaaS API is a key component for making the modules accessible to everybody (including non-experts), as it provides a consistent and easy-to-use way to access the model's functionality. It is available for both inference and training.
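
For example, a deployed module can be queried over plain HTTP. The following sketch assumes a local DEEPaaS deployment on port 5000, a module named demo_app and an input parameter called files; these names are illustrative, so check the interactive API documentation of your own deployment for the exact endpoints and parameters.

    # Sketch of querying a module through its DEEPaaS REST API.
    # Host, module name and parameter names are assumptions: consult the
    # Swagger/OpenAPI page of your deployment for the real signature.
    import requests

    DEEPAAS_URL = "http://localhost:5000"   # assumed deployment endpoint
    MODULE = "demo_app"                     # assumed module name

    # List the modules served by this deployment
    models = requests.get(f"{DEEPAAS_URL}/v2/models/").json()
    print(models)

    # Run inference by posting the inputs the module declares
    # (here an image file; your module may expect different arguments)
    with open("example.jpg", "rb") as f:
        resp = requests.post(
            f"{DEEPAAS_URL}/v2/models/{MODULE}/predict/",
            files={"files": f},
        )
    print(resp.json())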

Advanced users who want to create new modules can make them compatible with the API so that they become available to the whole community. This is easy to do, since it only requires minor changes to the user's code.
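
As an illustration, the adapter is essentially a small set of Python functions wrapping the user's existing code. The function names and return values below are a simplified sketch, not the definitive interface; the AI4OS Modules Template and the DEEPaaS documentation define the exact contract.

    # Rough sketch of the thin wrapper a module exposes so DEEPaaS can
    # serve it over HTTP. Function names and return values are
    # illustrative; see the Modules Template for the exact interface.

    def _load_model():
        # Placeholder for loading your trained model from disk.
        return lambda text: {"length": len(text)}

    MODEL = _load_model()

    def get_metadata():
        # Describe the module so the API can list it.
        return {"name": "demo_app", "description": "Example DEEPaaS wrapper"}

    def predict(**kwargs):
        # Forward the arguments received through the API to your inference code.
        return MODEL(kwargs.get("input", ""))

    def train(**kwargs):
        # Launch a training run with the hyperparameters received through
        # the API. Here we just echo them back.
        return {"status": "finished", "hyperparameters": kwargs}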

The data storage resources

Storage is essential for users who want to create new services by training modules on their custom data. For the moment, we support hosting data in the AI4OS Nextcloud instance (up to 2 TB by default), as well as integration with popular cloud storage options like Google Drive, Dropbox, Amazon S3, and many more.
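
As an example, data can also be pushed to Nextcloud programmatically through its standard WebDAV interface. The host, user and app password below are placeholders; most users will simply rely on the Nextcloud sync clients or the storage options offered by the Dashboard.

    # Sketch of uploading a local dataset file to Nextcloud over WebDAV.
    # Replace the placeholder URL and credentials with your own.
    import requests

    NEXTCLOUD_URL = "https://nextcloud.example.org"   # replace with the AI4OS Nextcloud URL
    USER = "my-user"
    APP_PASSWORD = "my-app-password"                  # create one in Nextcloud settings

    remote_path = f"{NEXTCLOUD_URL}/remote.php/dav/files/{USER}/datasets/train.csv"

    with open("train.csv", "rb") as f:
        resp = requests.put(remote_path, data=f, auth=(USER, APP_PASSWORD))

    resp.raise_for_status()
    print("Uploaded:", remote_path)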

The Inference Platform (OSCAR)

The Inference platform (OSCAR) is a fully managed service that helps users deploy pre-trained AI models with horizontal scalability, thanks to a serverless approach.

Users can also compose those models into complex AI workflows.
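
As a sketch, a deployed OSCAR service can be invoked synchronously with a plain HTTP request. The endpoint, service name, token and payload encoding below are illustrative assumptions that depend on how the service was defined; check the OSCAR documentation for your deployment.

    # Sketch of a synchronous invocation of an OSCAR service.
    # Endpoint, service name, token and encoding are assumptions.
    import base64
    import requests

    OSCAR_ENDPOINT = "https://oscar.example.org"   # assumed cluster endpoint
    SERVICE = "plant-classifier"                   # assumed service name
    TOKEN = "my-service-token"                     # assumed access token

    # Binary inputs are commonly base64-encoded in the request body.
    with open("leaf.jpg", "rb") as f:
        payload = base64.b64encode(f.read())

    resp = requests.post(
        f"{OSCAR_ENDPOINT}/run/{SERVICE}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data=payload,
    )
    print(resp.text)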