| Name | Description | Size | Format |
|---|---|---|---|
| | | 1.77 MB | Adobe PDF |
Authors
Advisor(s)
Abstract(s)
Project deployment management is an essential part of any type of project, from managing the resources allocated to the project to monitoring the virtual machines that host it.

After companies went through an era in which deployment was done directly on physical machines, where it was impossible to limit the resources allocated to each project and performance problems ensued, the first deployments on virtual machines appeared. With virtual machines, dividing the resources of a single machine became easier, and the problem of the previous approach was solved.

Virtualization was followed by the concept of containers, which are very similar to virtual machines but much lighter: they require far fewer resources because they do not need to emulate a full operating system. A container also packages everything needed to run any kind of software, including code, libraries, and configuration. With this concept, technologies such as Docker made containers the standard in IT companies across the most varied projects, thanks to how easily they keep applications consistent and reproduce different system environments. Docker makes it easier to develop, test, and deploy applications, as well as to manage and scale them in production environments.

With the emergence and growing popularity of containers, the container orchestrator Kubernetes (K8s) appeared as a solution for managing complex environments with many containers. K8s makes it possible to automate, scale, and manage containerized applications [Goo23d]. In this work, a study of the main characteristics of container orchestration systems is carried out, with particular attention to K8s. A local K8s installation was set up to create a test environment and to analyze the impact of autoscaling on the performance of a set of example applications and on system resources.

The results show that autoscaling is a tool that must be parameterized according to the type of application and adjusted over time.
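As an illustration of the kind of per-application parameterization the abstract refers to, the sketch below shows how a basic CPU-based HorizontalPodAutoscaler could be created with the official Kubernetes Python client. The deployment name, replica bounds, and CPU target are illustrative assumptions, not values taken from the thesis.

```python
# Minimal sketch (not from the thesis): creating a CPU-based
# HorizontalPodAutoscaler with the official Kubernetes Python client.
# The deployment name "example-app" and the thresholds are assumptions.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig, e.g. of a local test cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="example-app-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="example-app"
        ),
        min_replicas=1,   # lower bound, tuned per application
        max_replicas=5,   # upper bound, limited by available node resources
        target_cpu_utilization_percentage=60,  # scale out above 60% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

In practice, these bounds and the CPU target are exactly the parameters that would need to be tuned per application and revisited over time, as the results summarized above suggest.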
Description
Keywords
Project Deployment; Docker; Autoscaling; Kubernetes; Container Orchestration; Virtualization