ArgoCD + Helm + Prometheus + Grafana + Slack Notification Setup

 

1. Install ArgoCD

Create the argocd namespace and apply the manifest that contains all the YAML needed to install ArgoCD in Kubernetes. Finally, obtain the password of the ArgoCD admin account:

kubectl create ns argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Expose the ArgoCD UI with a port-forward:

kubectl port-forward svc/argocd-server -n argocd 8080:443

2. Install Prometheus

Create an ArgoCD Application manifest named prometheus-helm-app.yaml:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prometheus
  namespace: argocd
spec:
  source:
    path: prometheus
    repoURL: https://github.com/javier2419/prometheus-helm.git
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: preus
  project: default

kubectl create namespace preus
kubectl apply -f prometheus-helm-app.yaml

3. Install Grafana

Create an ArgoCD Application manifest named grafana-helm-app.yaml:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: grafana
  namespace: argocd
spec:
  source:
    path: grafana
    repoURL: https://github.com/javier2419/prometheus-helm.git
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: grafana
  project: default

kubectl create namespace grafana
kubectl apply -f grafana-helm-app.yaml
kubectl port-forward svc/grafana -n grafana 3001:3000

The Grafana admin password is defined in values.yaml.

Now add Prometheus as a data source in Grafana:

URL: http://prometheus-server.preus.svc.cluster.local

Click Save & Test.

4. Import the "Kubernetes deployment metrics with GPU" dashboard in Grafana

5. ArgoCD Slack Notification Setup

5.1 Create a Slack application using https://api.slack.com/apps?new_app=1

5.2 Once the application is created, navigate to OAuth & Permissions.

5.3 Click Permissions under the "Add features and functionality" section and add the chat:write scope. To use the optional username and icon overrides in the Slack notification service, also add the chat:write.customize scope.

5.4 Scroll back to the top, click "Install App to Workspace" and confirm the installation.

5.5 Once the installation is completed, copy the OAuth token.

5.6 Create a Slack channel, for example argo, and add your bot to this channel, otherwise notifications won't work.

5.7 Store the token in the argocd-notifications-secret Secret:

apiVersion: v1
kind: Secret
metadata:
  name: argocd-notifications-secret
  namespace: argocd
stringData:
  slack-token: "xoxb-xx-your secret"

The above file is called argocd-notifications-secret.yaml.

kubectl apply -f argocd-notifications-secret.yaml

Finally, use the OAuth token to configure the Slack integration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  service.slack: |
    token: $slack-token # use as it is
  defaultTriggers: |
    - on-deployed
  trigger.on-deployed: |
    - description: Application is synced and healthy. Triggered once per commit.
      oncePer: app.status.operationState.syncResult.revision
      send:
      - app-deployed
      when: app.status.operationState.phase in ['Succeeded'] and app.status.health.status == 'Healthy' and app.status.sync.status == 'Synced'
  template.app-deployed: |
    message: |
      {{if eq .serviceType "slack"}}:white_check_mark:{{end}} Application {{.app.metadata.name}} is now running new version of deployments manifests.
    slack:
      attachments: |
        [{
          "title": "{{ .app.metadata.name}}",
          "title_link":"{{.context.argocdUrl}}/applications/{{.app.metadata.name}}",
          "color": "#18be52",
          "fields": [
          {
            "title": "Sync Status",
            "value": "{{.app.status.sync.status}}",
            "short": true
          },
          {
            "title": "Repository",
            "value": "{{.app.spec.source.repoURL}}",
            "short": true
          },
          {
            "title": "Revision",
            "value": "{{.app.status.sync.revision}}",
            "short": true
          }
          {{range $index, $c := .app.status.conditions}}
          {{if not $index}},{{end}}
          {{if $index}},{{end}}
          {
            "title": "{{$c.type}}",
            "value": "{{$c.message}}",
            "short": true
          }
          {{end}}
          ]
        }]

The above file is called argocd-notifications-cm.yaml.

kubectl apply -f argocd-notifications-cm.yaml

Create a Slack integration subscription:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
  annotations:
    notifications.argoproj.io/subscribe.on-deployed.slack: argo # Slack channel name
spec:
  source:
    path: helm-guestbook
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: kube-system
  project: default

Testing

To test the whole setup end to end, provision an EKS cluster with Terraform. What you will need:

An AWS account
A virtual machine
PuTTY or any SSH client

## TERRAFORM INSTALLATION

sudo apt-get update && sudo apt-get install -y gnupg software-properties-common

wget -O- https://apt.releases.hashicorp.com/gpg | \
    gpg --dearmor | \
    sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg

gpg --no-default-keyring \
    --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
    --fingerprint

echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
    https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
    sudo tee /etc/apt/sources.list.d/hashicorp.list

sudo apt update

sudo apt-get install terraform

## AWS CLI

sudo apt install unzip

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# to update an existing AWS CLI installation
sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update

aws configure

git clone https://github.com/hashicorp/learn-terraform-provision-eks-cluster

cd learn-terraform-provision-eks-cluster

# comment out the cloud configuration block in terraform.tf

terraform init

terraform apply

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# alternatively, install kubectl via snap
sudo snap install kubectl --classic
kubectl version --client

aws eks --region $(terraform output -raw region) update-kubeconfig \
    --name $(terraform output -raw cluster_name)

kubectl cluster-info
kubectl get nodes

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo

kubectl delete -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl delete namespace argocd
terraform destroy --auto-approve

Understand Frontend Architecture


What’s a Clean Frontend Architecture?

Every developer should have the same goal: the code base of each project should be easily extendable. That means new features should be easy to add, bug fixes should not create other bugs, and each developer should know exactly how to achieve this. In other words, projects must be easily maintainable and features should be addable in a manageable amount of time.

There are some principles that help to achieve this goal.

SOLID, KISS (Keep it short and simple), DRY (Don’t repeat yourself), DDD (Domain-Driven-Design).

However, in my opinion the most important factors are architectural patterns and rules, both domain and technical ones, that comply with these patterns.

Tip 01: Define technical and domain rules

Maybe you have heard the following statement.

We have started with Clean Code and a Clean Architecture. However, now we have something that cannot be as easily maintained as before.

Most of the time, teams are extended or developers are replaced by others who are not that familiar with the implemented architecture. So, developers tend to break architectural rules that are only implicitly, not explicitly, defined.

The following diagram shows a sample architecture. There is an API layer that contains services and DTOs, a store with actions and queries, and the different domains/modules with components and utils.

Sample Architecture

As illustrated above there are some implicit rules defined.

  • Mapper: The use of frontend models instead of DTOs (see the sketch after this list)
  • Store: The use of a store to communicate with the Service-Layer (API)
  • Smart- and dumb: To separate the component logic into dumb- and smart-components
  • Domain: To classify components into domains/modules
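
To make the mapper rule and the smart/dumb split more concrete, here is a minimal TypeScript/Angular sketch. All names (OrderDto, Order, OrderFacade, OrderListComponent, OrderPageComponent) are hypothetical and the NgModule/CommonModule wiring is omitted; the point is only that components work with frontend models, dumb components receive inputs and emit outputs, and smart components talk to a store facade instead of the service layer.

// Hypothetical names; NgModule/CommonModule wiring omitted for brevity.
import { Component, EventEmitter, Injectable, Input, Output } from '@angular/core';
import { BehaviorSubject, Observable } from 'rxjs';

// DTO as returned by the API (service layer).
export interface OrderDto {
  order_id: string;
  created_at: string;
  total_cents: number;
}

// Frontend model used by components instead of the DTO.
export interface Order {
  id: string;
  createdAt: Date;
  total: number;
}

// Mapper: the only place that knows about the DTO shape.
export function mapOrderDtoToOrder(dto: OrderDto): Order {
  return {
    id: dto.order_id,
    createdAt: new Date(dto.created_at),
    total: dto.total_cents / 100,
  };
}

// Store facade: hides the service layer and the store implementation from components.
@Injectable({ providedIn: 'root' })
export class OrderFacade {
  private readonly ordersSubject = new BehaviorSubject<Order[]>([]);
  readonly orders$: Observable<Order[]> = this.ordersSubject.asObservable();

  // In a real app this would dispatch a store action or call the API service,
  // mapping DTOs to frontend models on the way in.
  loadOrders(): void {
    const dtos: OrderDto[] = [
      { order_id: 'o-1', created_at: '2024-01-01T00:00:00Z', total_cents: 1999 },
    ];
    this.ordersSubject.next(dtos.map(mapOrderDtoToOrder));
  }

  selectOrder(order: Order): void {
    // e.g. dispatch a "select" action to the store
    console.log('selected order', order.id);
  }
}

// Dumb component: only @Input/@Output, no store or service access.
@Component({
  selector: 'app-order-list',
  template: `
    <ul>
      <li *ngFor="let order of orders" (click)="selected.emit(order)">
        {{ order.id }}: {{ order.total | currency }}
      </li>
    </ul>
  `,
})
export class OrderListComponent {
  @Input() orders: Order[] = [];
  @Output() selected = new EventEmitter<Order>();
}

// Smart component: talks to the store facade, never to the API directly.
@Component({
  selector: 'app-order-page',
  template: `
    <app-order-list
      [orders]="(facade.orders$ | async) ?? []"
      (selected)="facade.selectOrder($event)">
    </app-order-list>
  `,
})
export class OrderPageComponent {
  constructor(public readonly facade: OrderFacade) {
    this.facade.loadOrders();
  }
}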

So, what can go wrong with this architecture?

There are a lot of implicit rules that are not explicitly defined. Therefore, developers may break those rules. The most obvious violations would be the following.

  1. Components directly communicate with the service-layer (API)
  2. Components use DTOs instead of provided frontend models
  3. Components include too much logic — no separation of dumb- and smart-components

However, there are also others that are not that obvious.

  • The shared domain/module may include too much logic
  • Direct coupling between domains/modules. The components in the order-domain could be used within the offer-domain.

This means that often there are also domain rules defined to prevent direct couplings between domains.

How can we solve this issue?

Module boundaries, Strategic Design (a method from Domain-Driven Design) and Clean Code principles help us to solve the problems above.

The following diagram shows an example architecture for separating domains. There is a feature-, ui-, domain-, and util-layer. However, as in any layered architecture, layers beneath cannot access the ones from above.

Tip 02: Use module boundaries

The proposed architecture above can be implemented by the use of Nx [2] and the definition of module boundaries (@nrwl/nx/enforce-module-boundaries).

With Nx the project is divided into several libraries. Each layer is represented by a library, and within those libraries tags can be defined. This helps to clarify the technical and the domain type of each library. The code snippets beneath show the project.json of such libraries.

{
  "name": "order-feature",
  "$schema": "../../../node_modules/nx/schemas/project-schema.json",
  "sourceRoot": "libs/order/feature-order/src",
  "prefix": "lib",
  "tags": ["domain:order", "type:feature"],
  "projectType": "library",
  ...
}

{
  "name": "shared-ui-common",
  "$schema": "../../../node_modules/nx/schemas/project-schema.json",
  "sourceRoot": "libs/shared/ui-common/src",
  "prefix": "lib",
  "tags": ["domain:shared", "type:ui"],
  "projectType": "library",
  ...
}

{
  "name": "order-domain",
  "$schema": "../../../node_modules/nx/schemas/project-schema.json",
  "sourceRoot": "libs/order/domain-order/src",
  "prefix": "lib",
  "tags": ["domain:order", "type:domain-logic"],
  "projectType": "library",
  ...
}

With those tags, defined within each library, module boundaries that enforce technical and domain rules can be defined. Those rules can be declared in the eslintrc.js and could look like the following:

'@nrwl/nx/enforce-module-boundaries': [
  'error',
  {
    enforceBuildableLibDependency: true,
    allow: [],
    depConstraints: [
      {
        "sourceTag": "type:app",
        "onlyDependOnLibsWithTags": [
          "type:api",
          "type:feature",
          "type:ui",
          "type:domain-logic",
          "type:util"
        ]
      },
      {
        "sourceTag": "type:feature",
        "onlyDependOnLibsWithTags": [
          "type:ui",
          "type:domain-logic",
          "type:util",
          "type:api"
        ]
      },
      {
        "sourceTag": "type:ui",
        "onlyDependOnLibsWithTags": [
          "type:domain-logic",
          "type:util",
          "type:ui"
        ]
      },
      {
        "sourceTag": "type:api",
        "onlyDependOnLibsWithTags": [
          "type:ui",
          "type:domain-logic",
          "type:util",
          "type:api"
        ]
      },
      {
        "sourceTag": "type:domain-logic",
        "onlyDependOnLibsWithTags": [
          "type:util",
          "type:domain-logic",
          "type:api"
        ]
      },
      {
        "sourceTag": "domain:offer",
        "onlyDependOnLibsWithTags": [
          "domain:shared",
          "domain:offer"
        ]
      },
      {
        "sourceTag": "domain:order",
        "onlyDependOnLibsWithTags": [
          "domain:shared",
          "domain:offer"
        ]
      },
      {
        "sourceTag": "domain:shared",
        "onlyDependOnLibsWithTags": ["domain:shared"]
      }
    ],
  },
],

Tip 03: Simplify the shared domain

The example above is a good start for a Clean Architecture. However, there are still some problems: both feature and ui are allowed to depend on the domain, which may violate the smart- and dumb-component separation; utils is not allowed to depend on domain; and too much logic may end up in the shared domain.

In this section I would like to address the problem with the shared domain and how it can be solved. One simple trick would be to either remove the shared domain entirely or remove some layers from it.

In Domain-Driven Design there is a concept named shared kernel, which includes all shared models and logic. However, personally, I am not a big fan of this concept. Real-world domains do not contain a shared domain.

Architecture without a shared domain

So, I would suggest moving shared components to a separate repository and importing them in each domain. In addition, shared utils can be moved into each domain. In the end, there is no shared domain anymore.

Architecture with a simplified shared domain

However, if one does not want to create a separate repository for shared components, the architecture could also look like the following: only shared dumb-components and utils are stored within the shared domain.
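
As a rough sketch of what such a simplified shared domain could contain (ConfirmButtonComponent and groupBy are hypothetical names), only presentational dumb components and pure utils live there, and nothing that carries domain logic:

// Hypothetical contents of the simplified shared domain:
// only dumb components and pure utils, no services, stores or DTOs.
import { Component, EventEmitter, Input, Output } from '@angular/core';

// A shared dumb component: purely presentational, reusable in any domain.
@Component({
  selector: 'shared-confirm-button',
  template: `<button type="button" (click)="confirmed.emit()">{{ label }}</button>`,
})
export class ConfirmButtonComponent {
  @Input() label = 'OK';
  @Output() confirmed = new EventEmitter<void>();
}

// A shared pure util: stateless and free of domain knowledge.
export function groupBy<T>(items: T[], key: (item: T) => string): Record<string, T[]> {
  const groups: Record<string, T[]> = {};
  for (const item of items) {
    const k = key(item);
    if (!groups[k]) {
      groups[k] = [];
    }
    groups[k].push(item);
  }
  return groups;
}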

Tip 04: Choose “code duplication” over hard couplings between domains

DRY (Don’t repeat yourself) and KISS (keep it as simple as possible) are principles that should not be applied all the time. Within domains it definitely makes sense to stick to those principles. However, outside the domains I would like to keep the coupling between domains as low as possible. Means, that I am willing to duplicate code.

In my personal experience, developers tend to unify everything, even things that don't actually belong together, which results in hard couplings between modules and often leads to code-bases that are no longer easy to maintain.

So, what should be shared between domains?

Let’s have a look at smart-components. The requirements/business-logic of smart-components may differ between domains. Therefore, I would not share such components. Moreover, the same applies to models. The properties of a product in the order-domain may differ from those in the offer-domain.

Thus, personally, I would only share utils and dumb-components. However, keep in mind that things may seem similar between domains at first, but requirements change over time and so does the implementation within each domain.

Think of a shared component. If there is a change-request that only addresses the order-domain, but not the offer-domain, one would not want to add a *ngIf to check the domain, but rather duplicate the component to separate the logic.

Why are hard couplings that bad?

The problem with hard couplings between domains/modules is that they negatively influence the maintainability of any project. Why? Because every change that is made may affect multiple domains/modules. This may not be a big problem at first, but over time changes made to the code-base may lead to unwanted side-effects.

Therefore, I would always choose to duplicate components and models instead of introducing another hard coupling between domains/modules.

Tip 05: Use separate API-slices

In the architecture proposed above there is the concept of APIs to share components, services and models between domains. However, often those components, services and models are not stored within a separate slice, but are just exported from the core code-base.

What’s the problem with this approach?

Think of an Offer-API that provides a REST-Service-API and some components. Now let's imagine that there are multiple teams that import this API. If there is no separate API slice for each team, then there is a hard coupling between those teams and the Offer-API. This means that Team 1 could request a feature that Team 2 doesn't need. This will not only lead to a high communication and management overhead, but also to high frustration within the teams. In addition, the team that provides the Offer-API will become the bottleneck for the other teams, which is not a situation anyone desires to find themselves in.

Offer-API dependencies

Therefore, I would suggest either adding separate API slices for each team, or having each team duplicate the code it needs and store it in its own code-base.
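
One lightweight way to implement such per-team slices, assuming an Nx-style workspace (the library paths and interfaces below are hypothetical), is to give the Offer-API one entry point per consuming team, so each team only depends on the surface it actually needs:

// Hypothetical per-team entry points for the Offer-API.

// libs/offer/api-team1/src/index.ts
// Team 1 only needs read access to offers.
export interface OfferSummary {
  id: string;
  title: string;
  price: number;
}

export interface Team1OfferApi {
  getOffers(): Promise<OfferSummary[]>;
}

// libs/offer/api-team2/src/index.ts
// Team 2 additionally creates offers; Team 1 never sees this surface,
// so changes requested by one team do not leak into the other slice.
export interface OfferDraft {
  title: string;
  price: number;
}

export interface Team2OfferApi {
  getOffers(): Promise<OfferSummary[]>;
  createOffer(draft: OfferDraft): Promise<OfferSummary>;
}

Each slice can then get its own tag (for example type:api), so the module-boundary rules from Tip 02 keep other teams from reaching around it.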

Tip 06: Define eslint rules

It's crucial for code to have a clear structure and defined rules, and eslint can help enforce them.

Therefore, I would strongly suggest adding eslint rules to any TypeScript-based code. The following code snippet of an eslintrc.js shows some example rules.

'@typescript-eslint/no-unsafe-member-access': 'warn',
'@typescript-eslint/no-unsafe-assignment': 'warn',
'@typescript-eslint/no-explicit-any': 'warn',
'@typescript-eslint/no-floating-promises': 'warn',
'@typescript-eslint/explicit-module-boundary-types': 'warn',
'@typescript-eslint/ban-ts-comment': 'warn',
'@typescript-eslint/unbound-method': 'warn',
'@angular-eslint/no-empty-lifecycle-method': 'warn',
'@angular-eslint/no-forward-ref': 'warn',
'@angular-eslint/no-input-rename': 'warn',
'@angular-eslint/no-output-native': 'warn',
'@angular-eslint/no-output-rename': 'warn',
'@angular-eslint/prefer-on-push-component-change-detection': 'warn',
'@angular-eslint/prefer-output-readonly': 'warn',
'@angular-eslint/relative-url-prefix': 'warn',
'@angular-eslint/use-component-selector': 'warn',
'@angular-eslint/use-component-view-encapsulation': 'warn',
'@angular-eslint/use-injectable-provided-in': 'off',
'@angular-eslint/use-lifecycle-interface': 'warn',
'@typescript-eslint/member-ordering': 'warn',
'@typescript-eslint/no-empty-function': 'warn',
'@typescript-eslint/no-unnecessary-type-assertion': 'warn',
'@typescript-eslint/no-unsafe-argument': 'warn',
'@typescript-eslint/no-unsafe-call': 'warn',
'@typescript-eslint/no-unsafe-return': 'warn',
'rxjs/no-implicit-any-catch': 'warn',
'rxjs/no-nested-subscribe': 'warn',
'rxjs/throw-error': 'warn',
'rxjs/no-unsafe-subject-next': 'warn'

However, keep in mind that defining eslint rules is not enough. Those rules should be integrated into your CI/CD pipeline. And by integrated I mean that if there is an eslint error, the build pipeline will fail.

Tip 07: Take code reviews seriously

I have talked a lot about couplings between domains/modules and defining technical and domain rules. However, in the end what matters is that every developer has a basic understanding of the architectural patterns and clean code principles, and that those principles and patterns are applied to the code-base. In other words, there is no perfect architecture that prevents code- and architectural-smells. Tools like eslint and module boundaries help developers to evaluate code changes better, but ultimately developers have to take code reviews very seriously to ensure that the code-base stays maintainable.

Summing Up

I have given some tips on how to achieve a Clean Frontend Architecture. The first and foremost one is defining technical and domain rules.

Then, I have suggested avoiding hard couplings between domains/modules and not taking the DRY principle (don't repeat yourself) too seriously. Within a domain it's good to stick to this principle, but outside of the domain I would always choose code duplication over hard couplings.

Moreover, I have given an introduction to module boundaries and how they can be implemented, and I have talked about the problems of a shared domain and shared APIs.

In the last sections I have discussed the power of eslint rules and the importance of code reviews for the maintainability of any project.
