Browsing by Author "Marques, Pedro Joel da Silva Miroto"
- ML Orchestrator: Development and Optimization of Machine Learning Pipelines and Platforms
Publication. Marques, Pedro Joel da Silva Miroto; Neves, João Carlos Raposo; Lopes, Vasco Ferrinho; Degardin, Bruno Manuel

Machine Learning Pipelines play a crucial role in the efficient development of large-scale models, a complex process that involves several stages and faces intrinsic challenges. This document explores these structures in depth, from the initial preparation of datasets to the final stage of model deployment, as well as the importance of optimizing them. Emphasis is also placed on the critical relevance of this process in Cloud Computing environments, where flexibility, scalability, and efficiency are imperative. By understanding and properly applying optimized strategies, we not only improve model performance but also maximize the benefits offered by cloud computing, thus shaping the future of Machine Learning development at scale. The Google Cloud Platform, and more specifically the Vertex AI tool, offers a comprehensive solution for building and deploying Machine Learning Pipelines, as it allows development teams to take advantage of pre-trained models, task automation, and simplified management of tasks and resources, improving scalability and enabling efficient processing of large volumes of data. In addition, an analysis is made of how the Google Kubernetes Engine tool plays a key role in managing and scaling these structures, since its ability to manage containers at large scale ensures efficient execution of Machine Learning processes and provides a dynamic response to client requests. To efficiently build and optimize an ML pipeline, essential objectives were set to ensure robustness and efficiency.
These include creating a Google Kubernetes Engine (GKE) cluster with complementary services for the Playground Tool service, employing scalability strategies such as KEDA, and deploying the DeepNeuronicML model for object and action prediction from real-time video streams. Additionally, a Copilot is used to monitor computational resources, ensuring the ML pipeline can manage multiple clients and their AI models in an optimized and scalable manner. To conclude, it is important to note that optimizing Machine Learning Pipelines in cloud environments is not just a necessity but a strategic advantage. By adopting innovative approaches and integrating the tools mentioned above (Vertex AI and Google Kubernetes Engine), business organizations can overcome the complex challenges of these structures and boost efficiency and innovation in their Machine Learning services.
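The staged structure the abstract describes (dataset preparation through model deployment) can be sketched as a minimal chain of steps. All names, the toy "model", and the quality gate below are illustrative placeholders, not the thesis's actual implementation or the Vertex AI API:

```python
from dataclasses import dataclass

# Illustrative ML pipeline stages: prepare -> train -> evaluate -> deploy.
# The "model" is a toy threshold picker, standing in for real training.

@dataclass
class Model:
    threshold: float

def prepare_data(raw):
    # Drop malformed records and normalize values to [0, 1].
    cleaned = [x for x in raw if x is not None]
    hi = max(cleaned)
    return [x / hi for x in cleaned]

def train(data):
    # "Training" here just picks the mean as a decision threshold.
    return Model(threshold=sum(data) / len(data))

def evaluate(model, data):
    # Fraction of samples above the threshold, as a stand-in metric.
    return sum(x > model.threshold for x in data) / len(data)

def deploy(model, metric, min_quality=0.2):
    # Gate deployment on the evaluation metric, as a real pipeline would.
    if metric < min_quality:
        raise ValueError("model below quality bar, not deploying")
    return {"endpoint": "local://demo", "threshold": model.threshold}

def run_pipeline(raw):
    data = prepare_data(raw)
    model = train(data)
    metric = evaluate(model, data)
    return deploy(model, metric)

print(run_pipeline([4, None, 2, 8, 6]))
```

In a managed platform such as Vertex AI Pipelines, each of these functions would become a containerized pipeline component, with the orchestrator handling dependencies, retries, and artifact passing between stages.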
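As a rough sketch of the KEDA scaling strategy mentioned above: for a queue-style trigger, KEDA-driven autoscaling aims to keep roughly a configured number of pending items per pod, clamped between replica bounds (and KEDA can scale deployments to zero when the queue is empty). The numbers and parameter names below are illustrative, not the thesis's actual configuration:

```python
import math

def desired_replicas(queue_length, target_per_replica,
                     min_replicas=0, max_replicas=20):
    # KEDA-style scaling for a queue trigger: aim for roughly
    # `target_per_replica` pending items per pod, clamped to bounds.
    desired = math.ceil(queue_length / target_per_replica)
    return max(min_replicas, min(max_replicas, desired))

print(desired_replicas(0, 5))    # empty queue -> scale to zero
print(desired_replicas(12, 5))   # 12 items, 5 per pod -> 3 replicas
print(desired_replicas(500, 5))  # capped at max_replicas -> 20
```

In a real GKE deployment this logic lives in KEDA and the Horizontal Pod Autoscaler rather than application code; a ScaledObject resource declares the trigger, the per-replica target, and the replica bounds.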