August 04, 2018

Azure Certification - Preparation - 04/08/2018

Microservices | Azure Service Fabric | Serverless | Containers | Kubernetes | OpenStack

https://www.dynatrace.com/news/blog/azure-services-explained-part-1-azure-service-fabric/ 
To understand the essence of Service Fabric, first we must talk about the evolution from monolithic apps to microservices-based apps.
Before microservices were a thing, most applications were built in a “monolithic” way. In a monolithic architecture, the app’s functionalities are tightly coupled into one service. This approach can (and usually does) mean long downtimes when deploying new features, which results in fewer opportunities to make updates.
Also, there are downsides when it comes to scaling. Because every component scales at the same rate regardless of use, scaling monolithic apps to meet demand is slow and expensive. Availability and reliability are achieved by hardware redundancy, which means additional cost and complexity.
Serverless, the new computing model, is almost everywhere defined as one that “allows you to build and run applications and services without thinking about servers.” If this definition makes you wonder how this differs from PaaS, you’ve got a point. But there is a difference.
With PaaS, you might write a Node app, check it into Git, deploy it to a Web Site/Application, and then you’ve got an endpoint. You might scale it up (get more CPU/Memory/Disk) or out (have 1, 2, n instances of the Web App), but it’s not seamless. It’s great, but you’re always aware of the servers.
With serverless systems like AWS Lambda, Azure Functions, or Google Cloud Functions, you really only have to upload your code and it’s running seconds later.
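On Azure, that workflow with the Azure Functions Core Tools looks roughly like this (a sketch; the project name, function name, and Function App name are made up):

```shell
# Scaffold a Functions project and an HTTP-triggered function
func init MyFunctionApp --worker-runtime node
cd MyFunctionApp
func new --template "HTTP trigger" --name hello

# Run locally, then publish to an existing Function App in Azure
func start
func azure functionapp publish my-function-app
```

Note there is no VM or App Service instance to pick first; the platform allocates compute when the function is invoked.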
While a hypervisor virtualizes the entire machine, containers abstract only the operating system kernel. This means containers don’t require direct access to the physical hardware, which allows for much lower resource consumption and much better cost effectiveness – one of the major differences between containers and VMs.
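You can see the shared-kernel point directly with Docker (assuming Docker is installed): every container reports the host’s kernel version, because no guest OS is ever booted:

```shell
uname -r                         # kernel version on the host
docker run --rm alpine uname -r  # same kernel, Alpine userland
docker run --rm ubuntu uname -r  # same kernel again, Ubuntu userland
```

A VM running the same images would show a different kernel per guest.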
Kubernetes, aka K8s, is an open-source cluster manager software for deploying, running and managing Docker containers at scale. It lets developers focus on their applications, and not worry about the underlying infrastructure that delivers them. And the beauty of it: Kubernetes can run on a multitude of cloud providers, such as AWS, GCE and Azure, on top of the Apache Mesos framework and even locally on Vagrant (VirtualBox).
Monitoring a Kubernetes cluster typically involves three categories of tooling:
  • Tools providing health checks of a Kubernetes cluster’s individual components
  • Tools providing end-to-end checks of a Kubernetes cluster’s functionality
  • Tools providing full monitoring insights into the hosts and applications you deploy with Kubernetes.
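The first two kinds of checks can be run by hand with kubectl (assumes a configured cluster; the `hello` workload is illustrative, and `componentstatuses` is deprecated on newer clusters):

```shell
# Health of individual control-plane components and nodes
kubectl get componentstatuses   # scheduler, controller-manager, etcd
kubectl get nodes               # every node should report Ready

# A crude end-to-end check: deploy, expose, then query a test workload
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80
kubectl run probe --rm -it --image=busybox --restart=Never -- wget -qO- hello
kubectl delete deployment,service hello
```

The third category is where products like Dynatrace (the linked article) come in.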

August 03, 2018

Azure Certification - Preparation - 03/08/2018

Storage Account Keys (2)
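Each storage account has two access keys so they can be rotated without downtime: regenerate key2 while clients use key1, then swap. A sketch with the Azure CLI (resource group and account names are placeholders):

```shell
# List both access keys for the account
az storage account keys list \
  --resource-group myRG --account-name mystorageacct -o table

# Regenerate the secondary key while clients are still on the primary
az storage account keys renew \
  --resource-group myRG --account-name mystorageacct --key key2
```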

App Types and App Service Plans
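A minimal sketch of the relationship with the Azure CLI (names are placeholders): the plan defines the compute (SKU, instance count), and one or more apps run on it:

```shell
# Create an App Service plan (the compute) and a web app on it
az appservice plan create --name myPlan --resource-group myRG --sku S1
az webapp create --name my-unique-webapp --resource-group myRG --plan myPlan

# Scale the plan out; every app hosted on the plan scales with it
az appservice plan update --name myPlan --resource-group myRG --number-of-workers 3
```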
Page Blobs vs Block Blobs
  • Block blobs are for your discrete storage objects like JPGs, log files, etc. that you’d typically view as a file in your local OS. Max. size 4.77TB. Regular (non-Premium) storage only.
  • Page blobs are for random read/write storage, such as VHDs (in fact, page blobs are what’s used for Azure Virtual Machine disks). Max. size 8TB. Supported by both regular and Premium Storage.
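The blob type is chosen at upload time. A sketch with the Azure CLI (account and container names are placeholders; note a page blob’s size must be a multiple of 512 bytes):

```shell
# Block blob: ordinary file-like object (the default type)
az storage blob upload --account-name mystorageacct --container-name files \
  --name app.log --file ./app.log --type block

# Page blob: random read/write storage, e.g. a VHD backing a VM disk
az storage blob upload --account-name mystorageacct --container-name disks \
  --name disk.vhd --file ./disk.vhd --type page
```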

SonarQube with Jenkins Setup using Docker Images

https://funnelgarden.com/sonarqube-jenkins-docker/
https://medium.com/@hakdogan/an-end-to-end-tutorial-to-continuous-integration-and-con...
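The gist of the linked setup is just two containers from the official images (default ports; a sketch, not the full tutorial):

```shell
# SonarQube server on http://localhost:9000
docker run -d --name sonarqube -p 9000:9000 sonarqube:lts

# Jenkins on http://localhost:8080; the SonarQube Scanner plugin
# is then installed and pointed at the SonarQube container from the Jenkins UI
docker run -d --name jenkins -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
```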