Earlier this year, I wrote my year in review for 2022, "2022 - Blood, Sweat and Tears"; many folks reached out to me and shared some of the things they had been through and are currently going through. It would be insane not to talk about how I got back up after all my failures last year.
At the beginning of the year, I was broke and had nothing going for me except my company, which I was on the verge of shutting down. After being sick and having nothing, I told my co-founder I could no longer run the company, but he wasn't having it. There is something about him that really inspires me: he is relentless. He paid most of the company salaries while I tried to find my feet. It was about to go down. I thought to myself: it was either I left Dubai for Nigeria (where I'm originally from) to start life afresh, or I waited to see what would happen to me, possibly eviction.
I have two friends who had just gotten married and were willing to let me live with them till I got back on my feet in Nigeria. Still, after everything I had accomplished, I asked myself how I could return to Nigeria with nothing to show. How do I go back and start life again? I was so full of regret and sadness. I only wept when I was alone but always held my head high. If you are very ambitious, you will understand how difficult it is to accept defeat. I did not even have a plane ticket to go anywhere. One of those days, I was conversing with a close friend who always loves to use one line, "I dey for you", which means "I've got your back". I was explaining how life was treating me and how I might have to start life afresh. I was saying my rent was due, and I could not pay it anymore. On the call, he said, "Check your Wise account". I was in shock; $1,000 had been sent. Good Lord, I lost myself for a few moments and almost fell from my work seat in tears to thank him.
It made me realize how much friends matter, especially those who genuinely love you. I didn't ask for it, but he knew I needed help. Wiz showed up for me. I could add a little money and pay my January rent. Okay! The biggest hassle was sorted. February was the same shit, and my close friend Brain showed up for me again. He gave me $1,000 for rent, and I survived again. March came, and a friend who was a huge inspiration to me in the DevOps world gave me another $1,000. Sorted again.
I began to ask myself: shouldn't I pay for a ticket, return to Nigeria, and start again? I was $3,000 in debt. One thing that played a massive role for me was that I lived in a residential area close to Indians, so food was cheap. I tried cooking but ended up cooking rubbish 🥲, so I stuck with all kinds of cereal. My favourite go-to food was rice with butter chicken (I was in an abusive relationship with that meal) and the paratha sandwich (another Indian food I liked so much). It was cheap and filled me up, mainly when I ate it with oatmeal. Lmfao! Phew, OG meal!
Where was I? Oh yeah! Even my mom was supporting me. She sent me money one day, and it might not have been a lot, but I got some food to eat that day. I was on compulsory fasting because I had no money; if I remember correctly, I had $3 to my name in the bank.
In April, I got a Kubernetes contract, and damn, it felt good; I could not believe that I, someone who used to average $7,000/month, was now super excited to see $500-$700 a month. Again, I returned to my co-founder and told him I could not continue like this anymore. I thought it would be crazy to ask him for money, considering he was doing a lot already, but he did send me some money again, and I paid my rent in April. Mind you, I had been contemplating moving in with my friends. In fact, when I told one of them, he wanted me to join them. It might have been inconvenient, but I didn't mind sleeping in the living room. Usually, when I visit, we play tennis, and I go on joy rides on his scooter. This would have been fun, but I had to think of something else; there was no way I wanted to inconvenience another person.
During this time, I kept building. My CTO/co-founder was triggered that I wrote almost 5-6 new products in 2 months. He was like, "When will we ship all these?" I was on steroids, LMAO! I don't know, but we will figure it out and cook. However, the worst happened in May. The rent was increased, and I was told I could not renew. I was panicking; there was no way this was happening to me at this time. "I am finished," I said to myself. This may be the end for real. Maybe I should take my Ls (losses), go back home, and start again. I had 2 weeks to move out, and this was peak season in Dubai. Prices were rising, and man, there was no way I could have managed this. I was panicking.
Luckily, the client who gave me the Kubernetes contract recommended me to a YC company, which gave me an offer I could not refuse. Even though I only signed that contract in June, I was so happy. I could not believe I had survived for 6 months in debt. My CTO/co-founder gave me more money to sort my rent out, and none of the folks who supported me gave me an ETA for returning the money. I moved into another apartment and sorted my rent for 2 months. I could not believe it; I could afford better meals and go out. I am sure my neighbours did not think someone lived in my apartment; I was either coding, sleeping, or upskilling.
After my contract ended, I was about to take another job as a Lead Engineer in Dubai, but I had never worked in an office; my jobs were always remote, so I asked them, "Would you give me a stipend or extra salary to come to the office 3 days a week?" One of my friends said that was the most Gen-Z thing to say, but I mean, I was not about to spend my salary going nowhere. Finding out employees use their actual salaries to commute to the office was new to me. I always thought they got a stipend for coming to the office or something. Anyway, I got another offer as a Staff Engineer after my birthday, and this was the special gift I needed.
I have been living in my new apartment, and I'm happy I took that risk (this is not advice). Did I have any accomplishments? Definitely! But this year in review is not about that. While I am grateful for them, it is about my friends. Even though I have repaid my debts to them, I want to take this time to appreciate them. I could never have gotten through this year without them. Finding people who genuinely care about you, and going through life with them, is essential. I hope you take that risk; I hope you scream a little; I hope you cry a little; I hope you stand back up. I hope you rise from the ashes.
I want to leave you with something that stuck with me this year, which I read over and over again, even in moments of doubt.
I know of no better life purpose than to perish in attempting the great and impossible. The fact that something seems impossible shouldn't be a reason not to pursue it. That's exactly what makes it worth pursuing; where would the courage and greatness be if success was certain and there was no risk? The only true failure is shrinking away from life's challenges. - Friedrich Nietzsche
The real treasures are the friends we made along the way. Thanks for reading!
NB: The names used in this article are aliases and not their real names, for privacy reasons.
Before working at Deimos Cloud, I was fascinated with writing code; I didn't think about performance, load testing, unit and integration tests, benchmarking, etc. I just wanted to code, code, and code. On the other hand, looking for a job was incredibly hard. I was writing TypeScript back then and got introductory calls for internships at companies like Uber and the rest. But that was it: introductions and nothing more. I soon realized I needed to do something to make my resume stand out as a twenty-one (21)-year-old software engineer.
I went ahead and built different open-source projects. I was hell-bent on releasing a new open-source project every month. I had to bring my work rate to its peak.
I started with Stacks—an interactive CLI that helps developers install different framework stacks.
🚨I made a tool called *stacks*. Stacks is an interactive CLI that helps you install your stack(MEAN, LAMP, LEMP, MEMP, DOCKER, etc.) quickly. You don't have to copy and paste commands from tons of blog posts anymore. 😌
— Obinna Odirionye (@odirionyeo) August 21, 2019
Here is a link: https://t.co/VbtfgkengM
RTs are appreciated! pic.twitter.com/Iip719lA0H
Stacks went ahead to trend on GitHub. This was an incredible moment for me. I could never have believed it if I had been told.
I want to sincerely thank y'all for the retweets and likes. I am completely mind-blown by this🤯. this tweet and tool blew beyond my expectation 🥺 and it is currently trending on @github today 😱. I don't have a Soundcloud account. I'm grateful. 💜💜💜https://t.co/sXwrmJ2tay https://t.co/UNKdg9VvWy pic.twitter.com/5FpYWfzdDs
— Obinna Odirionye (@odirionyeo) August 23, 2019
I then got super motivated and inspired and built another, similar tool called DevOps-pack.
🚨I made a tool called *DevOps-pack*. It is an interactive CLI with an All-in-one starter pack that helps you provision your machine with your favorite tools. K8, Go, AWS, Nodejs, Terraform, Ansible, DotNetCore, etc.
— Obinna Odirionye (@odirionyeo) August 27, 2019
Here is a link: https://t.co/0Mq9C5wRfr
RTs are appreciated! pic.twitter.com/xA9ev4ejYt
At this point, I just wanted to keep releasing. It was like I could write an entire operating system. LMAO! Such adrenaline.
Soon, I was faced with the question: what do I build next? How do I make something people will use? I stumbled across Prosper Otemuyiwa's laravel-hackathon-starter, and I was like, yes! We should have this for Node.js/TypeScript. I later stumbled upon a TypeScript hackathon starter, but that wasn't enough to put me off developing the idea. As I explored the codebase, I didn't particularly like how it was mapped out, from routing to third-party integrations to authentication and more (right now, I only remember a little of it). I knew I could not build something like that in a month, and I had to study and read more to make it. On this journey, I considered myself a Tooling Engineer. I don't know if that was a thing, but I liked building tools, and so I liked the title.
In my search for more folks to learn from, I came across Sarah Drasner. I legitimately screamed when I went through her GitHub repos. Boy! Lots of tools! Her work and projects inspired me to keep building. I eventually released the Node.js TypeScript Hackathon Starter.
🚨 I made a template called *Hackathon Starter Kit*. A Node.js app with Signup, Login(Local, Github, Facebook, Twitter, Google, etc.), Realtime monitoring, CRUD, Mailing + PWA support and more. 🔥
— Obinna Odirionye (@odirionyeo) October 14, 2019
Demo: https://t.co/ogifRpT7Dh
Github: https://t.co/p7lq63WpFx
RTs are appreciated! pic.twitter.com/a20ovphDGO
It blew up beyond what I expected, and I updated my resume with these projects. My cover letter looked fascinating, and I was now getting more interviews. I could not believe it; I finally landed a job at Deimos Cloud.
After submitting more than 80 applications from June till October, and so many rejections 😭, hoping for a YES! I finally got a YES 🥳. I finally got a job as a DevOps engineer @DeimosCloud 🔥
— Obinna Odirionye (@odirionyeo) November 4, 2019
I hope this inspires someone out there not to give up and thank you @Raznerd 💚
One thing I can tell you is that it was just the beginning. On my team were Bakyboy (only I call him that) and Idowu, my colleagues and now my amazing friends. These folks were the best! I always ran to them if there were any issues. I remember deleting a staging cluster for a customer around 02:00 AM. I woke up Idowu, and I was sweating profusely at night. I was new on the block; in fact, I felt I would get fired. Well, I wasn't.
Bakare was one of the engineers who fascinated me; he was brilliant at many things. I did not understand how it was possible. He could write good software, and he was well-versed in many areas of engineering. We were age mates, and I thought I had seen it all till I saw him writing an OCI runtime in C++. He invited me over and showed me the code. I looked at him like, "Who are you? I struggled with an MVC app, and boys are out here writing OCI runtimes." I knew then that I didn't know as much as I needed to. I asked him how and why he was this good. He said it was about reading and studying; Bakare reads a lot of engineering blogs and tutorials. Following that advice, all I had to do was study and experiment. I went to different engineering blogs and sites. Before I went to bed, I had to read one article related to engineering, and this helped me tremendously over the months; I found myself fixing issues I had read about somewhere. I found the root causes of problems because I understood how the systems worked. At one point, I wanted to know how interpreters work and decided to build a programming language based on my native language (Igbo). I never actually finished it, but it was super cool and a great learning experience.
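For a taste of what "how interpreters work" means in practice, here is a minimal sketch in Go of the tokenize-then-walk idea. To be clear, this is not the Igbo-based language I built; it is just the core loop every interpreter starts from, and the function names are made up for illustration.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// eval evaluates a flat expression like "1 + 2 - 3": it lexes the source into
// tokens, then walks them left to right, the simplest possible interpreter loop.
func eval(src string) (int, error) {
	tokens := strings.Fields(src) // lexing: split the source into tokens
	if len(tokens) == 0 || len(tokens)%2 == 0 {
		return 0, fmt.Errorf("malformed expression: %q", src)
	}
	total, err := strconv.Atoi(tokens[0])
	if err != nil {
		return 0, err
	}
	// parse and evaluate in a single pass over (operator, operand) pairs
	for i := 1; i < len(tokens); i += 2 {
		n, err := strconv.Atoi(tokens[i+1])
		if err != nil {
			return 0, err
		}
		switch tokens[i] {
		case "+":
			total += n
		case "-":
			total -= n
		default:
			return 0, fmt.Errorf("unknown operator %q", tokens[i])
		}
	}
	return total, nil
}

func main() {
	n, _ := eval("1 + 2 + 3 - 4")
	fmt.Println(n) // 2
}
```

A real language adds a proper parser and an AST on top, but the lex-then-walk skeleton stays the same.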
It is vital to understand how things work. Running an application is not enough; understand how the app works, its core runtime, and how tasks are scheduled. Those nitty-gritty details many overlook are how you solve problems better. One of the many things you can ask me in my sleep is what happens in Kubernetes when you run kubectl apply -f apply.yaml. These things helped me solve issues and view systems differently.
To this day, I still read a technical article each day before I sleep. One show I started watching about two months ago that relates to my way of life as an engineer is The Good Doctor.
I want to leave you with some advice that Sarah Drasner gave me early in my career regarding how hard you should work. Do you need to have sleepless nights or code 18 hours a day?
I asked her when I was 21 years old.
Hello Sarah, good morning. Please, I wanted to ask something: do you believe in work-life balance? I mean, John Resig didn't just work 40 hrs/week to build jQuery, nor Linus Torvalds for Linux; they probably spent a lot of hours and sacrificed their sleep. Does it mean that if I want to be better, I probably have to pay the ultimate price? Also, why do a lot of great people tell us we need to sleep and enjoy life, when if we dug deeper, we'd probably find that these great people worked extremely hard and made a lot of sacrifices along the way? I am really confused here, ma'am.
She said:
Yeah and your confusion is fair. I think it’s about balance. Neither are true- you shouldn’t code 100% of the time, and you need sleep. But putting the work in will yield better results.
You can work hard AND hang out with your friends and find downtime
Also you might find you start enjoying it so much you don’t mind it, and it doesn’t feel like paying a price.
I love coding and will turn down social activities for it sometimes. I’m sure that’s how Resig felt too.
That it was engrossing and fun and worth his time.
I have used this as a mantra in becoming an engineer. Engineering should be fun. Engineering shouldn't feel like work. It should be something you enjoy, and that enjoyment can be cultivated. If it is not your passion, that's okay; you should do something else.
Thank you so much for reading, and I hope this was helpful to you one way or the other.
Thanks to Bọ́lájí Ayọ̀dejì for proofreading the draft of this post
Around June 2022, I earned $200K+ and had over $50K in the bank. I always wanted to build products that people would love, but because I had a 9-5, I felt it was hindering my dreams. So, like every super-hyped person, I went ahead and quit all my jobs and hired terrific folks who would help build Clouddley—the next-generation cloud platform built by developers for developers, indie hackers, and startups.
Well, life never went as planned. I was happy writing code and assisting my co-founder, but it seemed like I wasn't seeing the results of what we were building. We had bugs that stayed longer than two weeks, and yet I was merging PRs. "Why the hell have I been merging PRs, yet the bugs I have constantly complained about are still in the codebase?" I asked myself. After thinking about it, I knew I had to change how we shipped features. I read Shape Up, the book used at Basecamp, and OMG! I got a better idea. I went back to the board and deleted backlogs and all distracting tasks. We started working on what we really needed at that moment.
This is what our project board looks like now. We ship what matters.
Gradually, my cash was burning, and what I had in the bank kept decreasing. Yet I had increasing personal and startup expenses to cover: rent, hosting, salaries, feeding, etc. All this, and I had no job.
As an immigrant living in Dubai, UAE, one of the critical issues I face is bank account creation and money transfers. Again, super hyped, I hired another set of folks to work on a product that solves that problem. Lmfao, I legit thought I was going to be alright. I foolishly assumed that once I released it, it was going to work. Investors would give us money, and we would start building the Venmo of the Middle East. Again, Lmfao. I tried so hard to raise money, but nothing worked. After a month, I shut it down and focused on Clouddley.
Fast-forward to November 2022: all I had left in the bank was $7K. Heavens! Seven thousand freaking US dollars, down from the $50K+ mentioned earlier. A drastic financial fall. A fall from grace. Damn!
Well, I continued building. We did a pre-launch, and I was pleased; I will not lie. Solid engineering. I made sales, wrote docs, wrote code, oversaw the team, paid salaries, etc. No life. I did not step outside my apartment for about seven weeks. You heard me right; seven freaking weeks. Little did I know that all hell was about to break loose.
I reached out to my heroes and got on a call with them. I was so happy because I wanted to raise a pre-seed from them. I sent my pitch deck and other things, but I am still waiting to hear from most of them. I am not a person that processes information emotionally. I like to think about things logically and based on facts, but this was different. I went to bed weeping at night. I had 4K USD left in the bank, and some folks even wanted 25% of the company. I have never been in a relationship that made me weep so hard. Startups messed with my emotions. Good Lord!
Also, don't meet your heroes; they are humans just like you and might fail you or not live up to your expectations. And that includes me! Don't hate the player; hate the game.
Then the worst happened. I fell sick, and I underestimated it again. I got some medication and thought I'd get better; no, no, it was just the beginning. I went from running a temperature on the first day to tongue sores and a throat infection the next. My upper and lower lips started bleeding. We have 24/7 medical support in Dubai, so I booked a visit; they came, gave me an IV and such, and told me I would be okay. At this point, I could not eat, drink, or swallow anything. I shoved food directly into my throat because I could not let it touch my tongue. It hurt!
After the doctors left, I assumed I would be okay. I went to the toilet to spit early in the morning, and I spat out blood; where was it coming from? I don't know, but I called the doctors again, and I was rushed to the hospital. I was there for days, eating cereal-like food. I had no motivation. I felt like dying because everything I had worked for was down the drain. I sent a message to my CTO to shut down Clouddley too.
For the first time, I felt like a complete failure. Looking at myself fighting for my life, I thought maybe this was the end for me. I deactivated my Twitter and disconnected everything. I felt so useless, to be honest! I started looking at things outside tech. "Maybe I should join the military," I said to myself. I downloaded the UAE military form, hoping to submit it. I was taught how to create TikTok videos. Lmfao. It was fun!
I never told my parents what I was going through. But damn, my friends reached out to me, calling me, supporting me, and visiting me. Freaking rockstars!
“The real treasures are the friends we made along the way.”
I got better, watched some movies, dusted off my resume, and started applying for Senior DevOps/SRE roles (I got offers but was not happy with the pay). In summary, I have learned a lot from all these experiences and will keep learning.
As for Clouddley, my co-founder and I will continue to work on it in our spare time. I have no talent; the only thing that made me what I am today is code, and I will keep writing it (I'm also building another product, winks).
Congratulations, you have achieved failure. Failure has been achieved! Thank God. Now, the only place to go from failure is to win. You have to achieve failure. You have to take it that far. Nobody wants to go that far. It’s too scary but you know something? I got news for you. That’s where winning is… Nothing has changed since the 70’s - Tom Platz.
I am no longer afraid of anything, nor of being alone. What could be more devastating than this? The only thing I feared more than dying was failing, and I failed.
Your dreams are valid! Go and build the future you want.
NB: You can check out the Spotify playlist I made in my dark moments. I really hope it helps you and/or motivates you!
I want to tell you a story: the backstory of Clouddley, why it was built, what we have built, the journey, and my failures along the way. As a teenager, I dreamed of owning a data center, a dream which originated from my love for the mysteries of "The Cloud," which, in my opinion, is the heart of the internet. Think about every piece of software you have ever used and how it improved your life in one way or another. When I think about the role cloud computing plays in the existence of that software, I love the Cloud even more. It is the heart of the internet ❤️.
In this article, I want to expose you to the world of cloud platforms and how my teenage desires led me to build a tool that solves issues cloud engineers, developers, and founders face regularly.
While working at Deimos, I was intrigued by how complicated cloud platforms have become. When you want to create an application or deploy a machine on Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP), there are a lot of things you need to configure: networking (VPCs and endpoints), IAM roles and policies, load balancing, the type of virtual machine to use, and more.
If we dive in a little deeper, we will begin to talk about:
Some existing tools automate these configuration processes, but they come with a learning curve. These topics also require some experience to grasp, or you hire what we call DevOps engineers. Let's not even talk about how expensive running an application on AWS can be. We have seen folks rack up $1,000+ in AWS bills for running three (3) applications or for poorly configured services. You can have $100K in AWS credits and still deploy the wrong configuration and services.
I once racked up $200 for forgetting to delete some EC2 machines while I was a student. Unfortunately, I could not pay for it, so AWS shut the account down. Another scenario that gave me sleepless nights was how a developer racked up a $4,000 AWS bill overnight from a DDoS attack. You can read more about it here.
And the list goes on and on. The issue here is that a lot of folks don't know how cloud platforms work, how to configure them effectively, how to manage them, and how to get the best developer experience from their preferred cloud platform.
But I did something about it. I attempted to solve these issues. I re-imagined cloud platforms, and I worked hard with the best team ever to build the new kid on the block.
I was talking to my best friend about the idea and how we could fix the issues mentioned in the previous section. One of the ideas was to work at our favorite companies and build this out there. I am a huge fan of Microsoft, HashiCorp, Cloudflare, Stripe, Vercel, and Netlify.
Like all passionate engineers, I applied to all of them. HashiCorp and Stripe don't hire from Nigeria. I never got any feedback from Microsoft, and I was so sad. I kept getting rejected by Netlify and Vercel. I was an early user of Netlify, loved it, and wanted to join the team, but I never got in! I wish I knew why I was rejected; it would have helped me prepare better :(. I have never been so intrigued by engineering made dead simple the way Vercel and Netlify did it. Awesome people!
I already knew that if I was going to build something fast and reliable, I had to use something built for speed. Vercel and Netlify were built with Go and Rust (I'm not certain of this; I only checked their careers pages to guess). I was already familiar with Node.js but didn't think it was cut out for the core systems engineering we wanted to do.
I started learning Go while still at university, thanks to my best buddy Bakare, who helped me with debugging then and still helps me now. Eventually, Deimos took a chance and hired me as a 21-year-old undergrad. I got the opportunity to work with outstanding engineers, work on solid projects, and sharpen my Go and other technical skills.
Fast-forward to today: I have become an experienced engineer working on Kubernetes tooling, infrastructure, and distributed systems. I'm also now a co-maintainer and member of CNCF Helm, which has over 380M downloads. Do let me know if you want to contribute :).
As part of my quest to build out my ideas, I quit my job and started working with Phirmware to build Clouddley full-time. We hired two engineers to assist us in building the server, handlers, services, etc. We started looking at what made AWS expensive, why apps are becoming harder to ship, the barrier to entry, etc., and I kid you not, it was hard. This was not the kind of MVP you could build in a month, and there was no API we could call to abstract away parts of it. I underestimated the difficulty, and it was a challenging process.
During the process of building Clouddley, I started writing a reverse proxy server for serverless applications that we use at Clouddley. I called it Veronica. Veronica helps us distribute traffic to different regions on AWS. It also uses Cloudflare API, Redis Cache, Google Cloud Global Load Balancer, etc. It helps cache static assets, offers CDN capabilities, mitigates DDOS attacks, and more. I will share more about how these work and the system design in a different article someday.
But we did it! After months of working, cooking, and building, we birthed Clouddley—a platform that allows developers and startup companies to deploy their applications to any cloud provider from our dashboard in seconds. We have shipped for the AWS cloud platform and are ready to onboard customers. We are looking forward to building for Google Cloud, Azure, etc., soon. We even use Clouddley internally as a team to deploy some of our microservices on AWS, which costs us about $5/month. Take a look! Wild, right?
On Clouddley, all you have to do is connect us to your cloud provider, connect your GitHub account, configure your app, and deploy it. Dead simple!
A demo showing a Node.js/Express app deployed on AWS (us-east-2) via the Clouddley dashboard.
Link to the deployed app: https://helloapp-7orgtdxa8a8b-man.clouddley.app/
You don't have to worry about VPCs, endpoints, IAM roles, policies, which type of virtual machine to use, load balancing, etc. Clouddley manages all of that for you while staying cost-effective, so you can focus on your code and on shipping to your users while we do the heavy lifting.
We have finished most parts for AWS. Here is what we have shipped:
Engineering is getting harder, and my team is constantly working on shipping more tooling. I'm currently working on a distributed backend service for custom SSL certificates. When we issue an SSL certificate to a user for a specific domain, a certificate and PEM files are created. We need to be able to store and read/write this data (I/O). The data needs to be encrypted at rest, and we need to be able to rotate the encryption key. You can read more about this topic. The certificate must be read at runtime, and on every subsequent request, whenever the domain name is called. This is the kind of tooling we're building, and we will continue to work hard to ship all the features we need to provide the best experience for our incoming users.
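The encrypt-at-rest plus key-rotation scheme described above can be sketched with Go's standard crypto library. This is a minimal illustration, not Clouddley's actual implementation; the function names and the AES-256-GCM choice are my own assumptions for the example.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"errors"
	"fmt"
)

// seal encrypts a PEM blob with AES-256-GCM; the random nonce is prepended to
// the ciphertext so the stored blob is self-contained.
func seal(key, pem []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, pem, nil), nil
}

// open decrypts a blob produced by seal.
func open(key, blob []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(blob) < gcm.NonceSize() {
		return nil, errors.New("ciphertext too short")
	}
	nonce, ct := blob[:gcm.NonceSize()], blob[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

// rotate re-encrypts a stored blob under a new key: decrypt with the old key,
// seal with the new one. This is the heart of key rotation.
func rotate(oldKey, newKey, blob []byte) ([]byte, error) {
	pem, err := open(oldKey, blob)
	if err != nil {
		return nil, err
	}
	return seal(newKey, pem)
}

func main() {
	oldKey := make([]byte, 32)
	newKey := make([]byte, 32)
	rand.Read(oldKey)
	rand.Read(newKey)

	cert := []byte("-----BEGIN CERTIFICATE-----\nexample\n-----END CERTIFICATE-----")
	blob, _ := seal(oldKey, cert)
	rotated, _ := rotate(oldKey, newKey, blob)
	plain, _ := open(newKey, rotated)
	fmt.Println(string(plain) == string(cert)) // true
}
```

In practice you would wrap each data key with a key-management service (envelope encryption) so rotation does not require re-encrypting every certificate, but the decrypt-then-reseal step is the same.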
We are looking to onboard customers this week and will be going live (pre-launch). We are still shipping, but the necessary work is done. If you are an AWS customer or planning to migrate to AWS, please send me a DM on Twitter or visit my about page to reach me.
I cannot wait to see the exciting things you will build on Clouddley, and I look forward to the bright times ahead. It’s just DAY ONE!
After two years, Microsoft Azure has finally released Azure Container Apps. Azure Container Apps allows you to deploy containers the serverless way, at scale. You don't have to worry about the underlying infrastructure, and you don't even need to know Kubernetes. It supports any programming language and framework, autoscales on demand, and offers even more powerful features.
If you can write a Dockerfile, then you can use Azure Container Apps. In this article, we are going to deploy a Node.js app to Azure Container Apps from GitHub. So let's get right into it!
Install and run the Express application generator by following the steps below:
Run the command below to scaffold the app:
mkdir container-svc && cd container-svc && npx express-generator
Install the npm dependencies
npm install
Start the application
npm start
You should see an output that looks similar to this:
Visit localhost:3000 in your browser; you should see the Express welcome page. Well done. You are awesome! Let's keep going!
Create a public repository and push your Node.js code to GitHub by following the steps below:
Copy and paste the commands below:
echo "# Nodejs-azure-container-app" >> README.md
git init
echo "node_modules" > .gitignore
git add .
git commit -m "first commit"
git branch -M main
git remote add origin https://github.com/nerdeveloper/nodejs-azure-container-app.git # add your repo URL here
git push -u origin main
The image below shows the output of git push.
The Node.js app on GitHub.
Let's create a Dockerfile for our Node.js app and push it to GitHub by following the steps below:
Create a Dockerfile in the root of the folder.
touch Dockerfile
Copy and paste the following content into the Dockerfile.
FROM node:16-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Run this command to add node_modules to a .dockerignore file so it is excluded when building the Docker image.
echo "node_modules" > .dockerignore
Push the updated code back to GitHub.
git add .
git commit -am "add docker components."
git push origin main
Build the image and push it to a Docker registry by following the steps below:
Log in to your Docker registry.
docker login
This command shows a prompt for a username and password. If you don’t have a docker account, kindly create one here
Let’s build the image
docker build -t nerdeveloper/nodejs-containerapp . # Change nerdeveloper to your docker username
Finally, we will push the image to our docker registry
docker push nerdeveloper/nodejs-containerapp # change nerdeveloper to your docker username
If you are not familiar with Docker or creating Dockerfiles, please check out this article.
Create a Resource Group on Azure by running the command below. The command will create a group in North Europe.
az group create -l northeurope -n nodejs-container-app
You will see the output below
{
  "id": "/subscriptions/786180ec-7ebe-4ada-8cd4-4201fd5a426c/resourceGroups/nodejs-container-app",
  "location": "northeurope",
  "managedBy": null,
  "name": "nodejs-container-app",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
Create an Azure Container Registry. It is the service that will store your Docker images. Run the following command:
az acr create --resource-group nodejs-container-app \
  --name NodejsExpressAcr --sku Basic
You will see the output below:
{ "adminUserEnabled": false, "anonymousPullEnabled": false, "creationDate": "2021-11-04T01:20:04.834773+00:00", "dataEndpointEnabled": false, "dataEndpointHostNames": [], "encryption": { "keyVaultProperties": null, "status": "disabled" }, "id": "/subscriptions/786180ec-7ebe-4ada-8cd4-4201fd5a426c/resourceGroups/nodejs-container-app/providers/Microsoft.ContainerRegistry/registries/NodejsExpressAcr", "identity": null, "location": "northeurope", "loginServer": "nodejsexpressacr.azurecr.io", "name": "NodejsExpressAcr", "networkRuleBypassOptions": "AzureServices", "networkRuleSet": null, "policies": { "exportPolicy": { "status": "enabled" }, "quarantinePolicy": { "status": "disabled" }, "retentionPolicy": { "days": 7, "lastUpdatedTime": "2021-11-04T01:20:08.687607+00:00", "status": "disabled" }, "trustPolicy": { "status": "disabled", "type": "Notary" } }, "privateEndpointConnections": [], "provisioningState": "Succeeded", "publicNetworkAccess": "Enabled", "resourceGroup": "nodejs-container-app", "sku": { "name": "Basic", "tier": "Basic" }, "status": null, "systemData": { "createdAt": "2021-11-04T01:20:04.834773+00:00", "createdBy": "odirionye@gmail.com", "createdByType": "User", "lastModifiedAt": "2021-11-04T01:20:04.834773+00:00", "lastModifiedBy": "odirionye@gmail.com", "lastModifiedByType": "User" }, "tags": {}, "type": "Microsoft.ContainerRegistry/registries", "zoneRedundancy": "Disabled"}
Set up the Azure Container App and deploy the Node.js app following the steps below:
Install the Azure Container apps extension.
az extension add --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.0-py2.py3-none-any.whl
Register the Microsoft.Web namespace.
az provider register --namespace Microsoft.Web
Export the following environment variables.
export RESOURCE_GROUP=nodejs-container-app
export LOCATION=northeurope
export LOG_ANALYTICS_WORKSPACE=nodejs-container-app-logs
export CONTAINERAPPS_ENVIRONMENT=container-app-env
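Every command that follows depends on these variables, so it can help to fail fast if one is missing. Below is a minimal sketch; require_vars is a hypothetical helper, not part of the Azure CLI:

```shell
# Sketch: verify each named environment variable is set and non-empty.
require_vars() {
  for name in "$@"; do
    eval "value=\${$name}"   # indirect variable lookup, POSIX-sh compatible
    if [ -z "$value" ]; then
      echo "missing: $name" >&2
      return 1
    fi
  done
  echo "all set"
}
```

Running `require_vars RESOURCE_GROUP LOCATION LOG_ANALYTICS_WORKSPACE CONTAINERAPPS_ENVIRONMENT` before the next step would catch a forgotten export early.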
Create an app environment for the Nodejs app following the steps below:
Create a new Log Analytics workspace.
az monitor log-analytics workspace create --resource-group $RESOURCE_GROUP --workspace-name $LOG_ANALYTICS_WORKSPACE
Get the Log Analytics Client ID and client secret.
Run this command first
LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az monitor log-analytics workspace show --query customerId -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE --out tsv`
Run this command next
LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=`az monitor log-analytics workspace get-shared-keys --query primarySharedKey -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE --out tsv`
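Backticks work, but `$(...)` is the more readable modern form, and it is worth checking that a lookup actually returned something before feeding it to the next command. A small sketch; capture is a hypothetical wrapper, demonstrated with echo instead of az so it can run anywhere:

```shell
# Sketch: run a command, capture stdout, and fail loudly if the output is empty.
capture() {
  out=$("$@") || return 1   # propagate the command's own failure
  [ -n "$out" ] || { echo "empty output from: $*" >&2; return 1; }
  printf '%s\n' "$out"
}

# Stand-in for wrapping the real az lookup above with capture
CLIENT_ID=$(capture echo "1b409e45-6bdb-496a-9f6c-78446cdf07b3")
```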
Create the environment.
az containerapp env create \
  --name $CONTAINERAPPS_ENVIRONMENT \
  --resource-group $RESOURCE_GROUP \
  --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
  --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET \
  --location "$LOCATION"
You should see an output like this:
Command group 'containerapp env' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
{
  "aksResourceId": null,
  "appLogsConfiguration": {
    "destination": "log-analytics",
    "logAnalyticsConfiguration": {
      "customerId": "1b409e45-6bdb-496a-9f6c-78446cdf07b3",
      "sharedKey": null
    }
  },
  "arcConfiguration": null,
  "containerAppsConfiguration": {
    "aciSubnetResourceName": null,
    "appSubnetResourceId": null,
    "controlPlaneSubnetResourceId": null,
    "daprAIInstrumentationKey": null,
    "subnetResourceId": null
  },
  "defaultDomain": "politesky-63867f57.northeurope.azurecontainerapps.io",
  "deploymentErrors": null,
  "extendedLocation": null,
  "id": "/subscriptions/786180ec-7ebe-4ada-8cd4-4201fd5a426c/resourceGroups/nodejs-container-app/providers/Microsoft.Web/kubeEnvironments/nodejs-container-app-env",
  "internalLoadBalancerEnabled": null,
  "kind": "containerenvironment",
  "kubeEnvironmentType": "managed",
  "location": "northeurope",
  "name": "nodejs-container-app-env",
  "provisioningState": "Succeeded",
  "resourceGroup": "nodejs-container-app",
  "staticIp": "13.74.44.20",
  "tags": null,
  "type": "Microsoft.Web/kubeenvironments"
}
Now we can fire up the container app following the steps below:
Run the following command.
az containerapp create \
  --name container-app \
  --resource-group $RESOURCE_GROUP \
  --environment $CONTAINERAPPS_ENVIRONMENT \
  --image docker.io/nerdeveloper/nodejs-containerapp \
  --target-port 3000 \
  --ingress 'external' \
  --query configuration.ingress.fqdn
Replace docker.io/nerdeveloper/nodejs-containerapp with your image name and 3000 with your custom port.
This command should return a URL similar to this:
"container-app.victoriousocean-4a8f30a3.northeurope.azurecontainerapps.io"
Copy the URL into your browser, and you should now see the Express homepage.
Destroy all the services when you are done.
az group delete --name $RESOURCE_GROUP
You should note that Azure Container Apps is not yet production-ready, but you can still play around with it. Yes! We were able to deploy a simple Node.js/Express app to Azure Container Apps, and I am as excited as you are about this serverless service. You should check out the Docs to learn more. I'd love to hear from you about your experience. Cheerio!
Vue.js is a front-end JavaScript framework used to build beautiful web user interfaces. In this article, we are going to deploy a Vue.js 3 application to Microsoft Azure Static Web Apps.
Install the Vue CLI.
npm install -g @vue/cli
Generate a Vue.js Hello World Template.
Type the command below into your terminal
vue create hello-world
You will see the output below
Select Default (Vue 3 Preview) ([Vue 3] babel, eslint)
After selection, you will see the following output below
Select Use NPM
It’s done. Kindly, wait while the template generates.
Test your Vue 3 app on your browser.
Enter the hello-world directory
cd hello-world
Run the following command to start your app
npm run serve
OR
vue serve
Open the link generated in your browser
Install the Microsoft Azure CLI.
For Linux (Ubuntu and Debian):
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
For macOS:
brew update && brew install azure-cli
For Windows (Run as Administrator):
Invoke-WebRequest -Uri https://aka.ms/installazurecliwindows -OutFile .\AzureCLI.msi; Start-Process msiexec.exe -Wait -ArgumentList '/I AzureCLI.msi /quiet'; rm .\AzureCLI.msi
Connect your Azure CLI to your Azure Account.
Type in this command to connect your account
az login
The command above will open your browser automatically, asking you to authenticate your Azure account.
You will see the image below when authenticated
Create a Public repository to push your Vue 3 code to GitHub.
echo "# vue-azure" >> README.mdgit initgit add README.mdgit commit -m "first commit"git branch -M maingit remote add origin https://github.com/nerdeveloper/vue-azure.git # add your repo URL heregit push -u origin main
Vue 3 app on GitHub
Create a resource group on Azure.
Run this command to create a group in West US.
az group create -l westus -n vue-azure-app
{ "id": "/subscriptions/786180ec-7ebe-4ada-8cd4-4201fd5a426c/resourceGroups/vue-azure-app", "location": "westus", "managedBy": null, "name": "vue-azure-app", "properties": { "provisioningState": "Succeeded" }, "tags": null, "type": "Microsoft.Resources/resourceGroups"}
Now deploy your Vue app to Azure!
Run the following command:
az staticwebapp create \
  -n my-vue-azure-app \
  -g vue-azure-app \
  -s https://github.com/nerdeveloper/vue-azure \
  -l westus2 \
  -b main \
  --app-artifact-location "dist" \
  --token [ENTER YOUR GITHUB TOKEN HERE]
Learn how to create a GitHub Token if you don’t have one.
You will see this output below
{ "branch": "main", "buildProperties": null, "customDomains": [], "defaultHostname": "orange-stone-00953e51e.azurestaticapps.net", "id": "/subscriptions/786180ec-7ebe-4ada-8cd4-4201fd5a426c/resourceGroups/vue-azure-app/providers/Microsoft.Web/staticSites/my-vue-azure-app", "kind": null, "location": "West US 2", "name": "my-vue-azure-app", "repositoryToken": null, "repositoryUrl": "https://github.com/nerdeveloper/vue-azure", "resourceGroup": "vue-azure-app", "sku": { "capabilities": null, "capacity": null, "family": null, "locations": null, "name": "Free", "size": null, "skuCapacity": null, "tier": "Free" }, "tags": null, "type": "Microsoft.Web/staticSites"}
It takes about five (5) minutes to deploy. In the output, you will see "defaultHostname": "orange-stone-00953e51e.azurestaticapps.net"
NB: your defaultHostname will be different from this.
Open orange-stone-00953e51e.azurestaticapps.net in your browser.
Hooray! You have successfully deployed a Vue 3 app to Azure. It is almost effortless when you have the right tools.
There are a lot of static web frameworks you can deploy to Azure, like Angular, React, and many more. Vue.js is a compelling framework and super fast on the client side. I'd always bet on Azure infrastructure and its simplicity. Give Microsoft Azure a try today!
Previously, I wrote an article on How to Create a Kubernetes Cluster on AWS with Kops and Netlify DNS. You should check it out if you need to deploy to AWS. Now let's dive right in!
Ensure you have kubectl, kops, and gcloud correctly installed; you can check that by running
kubectl
You can also check what version of kubectl you are running
kubectl version | grep -i GitVersion
You should see an output similar to this:
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-21T23:01:33Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
You can also check the version of kops
kops version
Check that gcloud is installed properly too
gcloud
You can also check the version of gcloud
gcloud version
Log in to your gcloud account using the command below
gcloud auth login
Automatically, a browser tab will open to authenticate and authorize your access to the Google Cloud Platform. It should look like this.
Go ahead and accept, and you will be redirected to the auth success page. It should look like this.
You have made some significant progress at this point! Let’s keep going
Configure the Google cloud project where kops will deploy the resources.
gcloud config set project [name of the project-id]
Create a Google cloud bucket.
gsutil mb gs://hey-kubernetes
We are creating a bucket because kops needs to save the state configuration of the provisioned resources somewhere; it helps track what has been created, modified, or deleted. Think of this as version control.
Export the following environment variables in the terminal
export KOPS_FEATURE_FLAGS=AlphaAllowGCE # to unlock the GCE features
export KOPS_STATE_STORE=gs://hey-kubernetes
Create a Virtual Private Cloud Network
gcloud compute networks create hey-kubernetes --subnet-mode=auto
The command above will create a VPC called hey-kubernetes, which the Kubernetes cluster resources (compute engine instances, firewalls, load balancers, etc.) will be assigned to.
Spin up a kubernetes cluster using Kops
kops create cluster heykubernetes.k8s.local \
  --zones us-central1-a \
  --node-count 3 \
  --node-size n1-standard-4 \
  --master-size n1-standard-2 \
  --vpc=hey-kubernetes
You should get a confirmation output that looks like this
Cluster configuration has been created.

Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster heykubernetes.k8s.local
 * edit your node instance group: kops edit ig --name=heykubernetes.k8s.local nodes-us-central1-a
 * edit your master instance group: kops edit ig --name=heykubernetes.k8s.local master-us-central1-a

Finally configure your cluster with: kops update cluster --name heykubernetes.k8s.local --yes --admin
Run the command below to configure the cluster for provisioning
kops update cluster --name heykubernetes.k8s.local --yes --admin
After a few minutes, you should get the final output that is similar to this
Cluster is starting. It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster --wait 10m
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa ubuntu@api.heykubernetes.k8s.local
 * the ubuntu user is specific to Ubuntu. If not using Ubuntu please use the appropriate user based on your OS.
 * read about installing addons at: https://kops.sigs.k8s.io/operations/addons.
You can validate the cluster to know when it is fully functional
kops validate cluster --wait 10m
You should see the output below when it is ready
INSTANCE GROUPS
NAME                  ROLE    MACHINETYPE    MIN  MAX  SUBNETS
master-us-central1-a  Master  n1-standard-2  1    1    us-central1
nodes-us-central1-a   Node    n1-standard-4  3    3    us-central1

NODE STATUS
NAME                       ROLE    READY
master-us-central1-a-p7qn  master  True
nodes-us-central1-a-1p8b   node    True
nodes-us-central1-a-9hdc   node    True
nodes-us-central1-a-snq9   node    True

Your cluster heykubernetes.k8s.local is ready.
Hurray! You have a Kubernetes cluster running. If you reached here, you are fantastic. Let's try something way cool: spinning up an Nginx image.
Run an nginx image using kubectl
kubectl run hey --image nginx
You should see output similar to this
pod/hey created
You can check to see if the Nginx is running using this command below
kubectl get pod
You will get this output
NAME  READY  STATUS   RESTARTS  AGE
hey   1/1    Running  0         79s
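kubectl run is great for quick tests; for anything long-lived you would normally write a manifest. Here is a rough sketch of the Pod the command above creates (recent kubectl versions create a bare Pod similar to this; the file name is illustrative):

```yaml
# hey-pod.yaml: approximate declarative equivalent of `kubectl run hey --image nginx`
apiVersion: v1
kind: Pod
metadata:
  name: hey
spec:
  containers:
    - name: hey
      image: nginx
```

You could create it with kubectl apply -f hey-pod.yaml and delete it the same way.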
You can delete the Nginx container by running
kubectl delete pod hey
Destroy the Kubernetes cluster
kops delete cluster heykubernetes.k8s.local --yes
Next, Destroy the VPC.
gcloud compute networks delete hey-kubernetes
Finally, Destroy the bucket.
gsutil rm -r gs://hey-kubernetes
Please note that some resources take a few minutes to create or destroy. Do not delete the bucket until the Kubernetes cluster has been completely deleted.
Whether you are a beginner at backend, SRE/DevOps, or infrastructure engineering, this post was created for you to get your feet wet with Kubernetes. I can't wait to see what you do with Kubernetes. Cheerio!
Creating multiple resources simultaneously by hand is insanely difficult. For example, imagine creating 50 Google Cloud Storage buckets and 50 virtual machines on Google Cloud at the same time. Aha! You can imagine how difficult and exhausted you would be.
In this article, I’ll introduce you to the basics of Terraform and how you can safely and predictably create, change, and improve your infrastructure.
Terraform is a cloud-agnostic Infrastructure as Code (IaC) tool for building, managing, and destroying infrastructure. This tool can deploy 50 Google Cloud Storage buckets and 50 virtual machines simultaneously.
One of the many things I love about terraform is that you can deploy it to Google Cloud, AWS, Microsoft Azure, and many more cloud providers. It also enhances collaboration among engineers using State Management.
There are three commands I use most when working with Terraform daily. I will explain each of them:
terraform init
This is the first command; it initializes a working directory. It runs when a new configuration is detected. You can run this command as many times as you want.
terraform plan
This is the second command; it creates an overview of what is going to be executed and gives a human-readable output from a state.
terraform apply
This is the third and last command; it executes the plan. For example, this command can provision, modify, or delete a particular piece of infrastructure.
A basic terraform folder structure looks like the output below:
├── main.tf
├── outputs.tf
├── providers.tf
└── variables.tf
Let’s take a deep dive into each of the following files in the tree structure above.
This contains the set of configurations and resource definitions that Terraform will execute.
For example, The code snippet below will deploy a virtual machine on AWS.
resource "aws_instance" "example" {ami = "ami-0c55b159cbfafe1f0"instance_type = "t2.micro"}
This contains another set of configurations that will output properties of the resource created from the state.
For example, the code snippet below prints out the public IP address when the virtual machine is deployed.
output "public_ip" { value = aws_instance.example.public_ip description = "The public IP of the webserver"}
This tells terraform the Cloud provider you wish to use. The provider can be Google Cloud, AWS, Digital Ocean, or others.
For example, the code snippets below tell terraform that you want to use AWS as your cloud provider and select the region you want to deploy to.
provider "aws" { region = "eu-west-1"}
It will help if you think of this file as one used to set environment variables. This allows your code to be more configurable and DRY.
For example, the code snippet below allows you to set env variables for the AWS region you want to deploy to.
variable "aws-region" { default = "eu-west-1" type = string description = "The AWS Region to deploy"}
For more understanding, let's break down the attributes of the variables.tf file.
default: This is where you provide the value of the variable.
type: This allows you to set type constraints, like string, bool, list, map, list(map(string)), and many more.
description: This is a sentence or two used to document the purpose of the variable.
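Putting the pieces together, here is a hedged sketch (assuming the files shown above) of the variable being consumed in providers.tf instead of a hard-coded region:

```hcl
# Sketch: providers.tf reading the region from variables.tf
provider "aws" {
  region = var.aws-region
}
```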
Now that we’ve covered the basics let’s explore more technical features you can achieve with terraform.
A module is like a container for multiple resources and collections of Terraform files. Modules are small, reusable configurations for grouping resources.
Terraform has a ton of functions that can be used for transforming data. A few popular functions are:
This is used to return the highest number in a set.
> max([1, 5, 2]...)
5
This is used to remove specified characters from the start and end of a string.
trim("?!obinna?!", "!?")Output:obinna
This takes two or more lists and joins them into one.
concat(["james", ""], ["hey", "obinna"])Output:[ "james", "", "hey", "obinna",]
This adds the numbers in a list together.
> sum([2, 2, 2, 2])
8
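These functions can also be nested. A quick terraform console sketch (illustrative values):

```
> max(2, sum([1, 1, 1]))
3
```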
This article only covers the tip of the iceberg with Terraform. There are several other resources out there that will guide you on deploying your first resource via Terraform.
You should check out these:
Thanks for reading!
GKE stands for Google Kubernetes Engine. It is a service offered by Google that ensures you can run containerized workloads at scale in the cloud. It can deploy, manage, and scale containerized applications, powered by Kubernetes. Now, we dive!
This is an essential component in any highly available system. It entails removing and adding services based on demand. GKE ships with autoscaling features such as the Horizontal Pod Autoscaler (HPA), the Vertical Pod Autoscaler (VPA), and the Cluster Autoscaler.
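As an illustration, a Horizontal Pod Autoscaler can be declared like this; a minimal sketch where the deployment name and targets are made up:

```yaml
# Sketch: scale the "web" deployment between 3 and 10 replicas based on CPU.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```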
This is another essential component for running production workloads. Even as human beings, it's vital we take care of our health, so why not our application?! There are two main health checks on Kubernetes, namely liveness probes and readiness probes.
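As an illustration, here is a hedged sketch of liveness and readiness probes on a container spec; the paths and port are made up, so use your app's real health endpoints:

```yaml
# Sketch: restart the container if /healthz fails; only route traffic once /ready passes.
livenessProbe:
  httpGet:
    path: /healthz
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /ready
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
```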
This happens to be one of the most underrated components when running a production workload. Kubernetes is not SECURED BY DEFAULT, but there are built-in components for configuring how secure you want it to be.
This has to do with keeping an app or service up and running without any form of interruption or downtime.
Chaos engineering in cloud computing has to do with experimenting on a system to build resilience to withstand unexpected conditions in production environments. Running production Workload is not easy. It is almost impossible to predict the workloads or traffic of any system to avoid an outage. Some tools that can assist with these include:
This is another crucial best practice. It gives a better description and understanding of each pod and service deployed on the Kubernetes cluster.
As Kubernetes grows, there will be more issues, and more ways to improve on those issues. I generally advise that if you set up a GKE cluster, you set the minimum number of nodes to three (3) and deploy it regionally for HA.
I hope this article gives you a better insight into how to build confidence in your system. Kindly check out the CIS Kubernetes Benchmark to get more knowledge about securing your Kubernetes cluster. Happy building!
The goal of this article is to show you different services and popular Google Cloud Platform tools, what they do, and alternative services from other cloud providers (like AWS, IBM Cloud, DigitalOcean, and Microsoft Azure). If you're thinking of migrating to an alternative provider for any reason, whether for learning purposes or for scaling, this article is for you!
Well, let’s delve right into it. Here are the top GCP services I use frequently, and a list of alternatives in other Cloud providers you can peruse:
This is a fully managed platform that allows you to scale large applications without worrying about the underlying infrastructure. App Engine supports a variety of programming languages like PHP, Go, Python, Java, etc. The brilliant thing about App Engine is that it supports Docker images out of the box.
Alternatives:
Cloud provider | Name of service |
---|---|
AWS | Elastic Beanstalk |
AZURE | App Service |
This is a managed platform that ensures you can run containerized workloads. It can deploy, manage and scale containerized applications, powered by Kubernetes.
Alternatives:
Cloud provider | Name of service |
---|---|
AWS | Elastic Kubernetes Service |
AZURE | Azure Kubernetes Service |
IBM | Cloud Kubernetes Service |
This is a low-latency, geographically distributed network running in Google data centers. This service harnesses edge computing, which brings content closer to your end users.
Alternative:
Cloud provider | Name of service |
---|---|
AWS | CloudFront |
AZURE | Azure CDN |
DIGITAL OCEAN | Spaces |
This is an Infrastructure-as-a-Service (IaaS) offering that runs on Google's infrastructure. It allows clients to run workloads on top of physical hardware; YouTube and other Google services run on top of it. You can run any operating system like Ubuntu, Debian, CentOS, and many more.
Alternatives:
Cloud provider | Name of service |
---|---|
AWS | EC2 Instance |
AZURE | Azure Virtual Machines |
This is a fully managed platform that brings serverless to containers. It was built for deploying and running HTTP-based containerized applications without provisioning and maintaining machines.
Alternatives:
Cloud provider | Name of service |
---|---|
AWS | Fargate |
This is a CLI tool used to create and manage resources on Google Cloud. It makes it easy to perform cloud tasks, like creating a bucket, deploying an app to App Engine, or deleting a virtual machine.
Alternatives:
Cloud provider | Name of service |
---|---|
AWS | AWS CLI |
AZURE | AZURE CLI |
This is a resilient, global Domain Name System (DNS) service that publishes your domain names to the global DNS effectively. Cloud DNS offers private and public DNS zones.
Alternatives:
Cloud provider | Name of service |
---|---|
AWS | Route53 |
AZURE | Azure DNS |
This was previously called StackDriver. This is another amazing service from Google Cloud. It collects metrics, logs, and traces for your application. It can also be used for profiling and debugging for observability.
Alternatives:
Cloud provider | Name of service |
---|---|
AWS | CloudWatch, X-Ray |
AZURE | Azure Monitor, Application Insights |
It is a fully managed, in-memory cache that handles millions of requests per second. It can be deployed as a standalone service or with replication, and it comes with security and configuration benefits.
Alternatives:
Cloud provider | Name of service |
---|---|
AWS | ElastiCache |
AZURE | Azure Cache for Redis |
This is a managed database-as-a-service (DBaaS) that assists in setting up, maintaining, and managing relational databases. There is a variety of Cloud SQL databases, like PostgreSQL, MySQL, etc.
Alternatives:
Cloud provider | Name of service |
---|---|
AWS | RDS |
AZURE | Azure Database for PostgreSQL, Azure Database for MySQL |
This service lets administrators authorize who can perform specific actions on a specific resource, thereby giving control over certain cloud resources. IAM stands for Identity and Access Management.
Alternatives:
Cloud provider | Name of service |
---|---|
AWS | IAM |
AZURE | Active Directory |
This article does not suggest that the Google Cloud Platform isn't great; it is focused on helping engineers use similar products on other cloud providers. I cannot wait to see the next thing you will build with any of the services I mentioned.
In this live stream, we touched on a lot of Azure services and their use cases, such as Azure VMs, Azure Kubernetes Service, Azure DNS, Azure App Service, Azure Blob Storage, Azure Database for PostgreSQL and MySQL servers, and many more.
Microsoft Azure offers two hundred dollars ($200) worth of credits for thirty (30) days, twelve (12) months of free services, and over twenty-five (25) free products. These free services don't expire. Learn more
Click here to view the transcript of the chat.
In this live stream, we touched on a lot of Google Cloud Platform services and their use cases, such as Google Compute Engine, Google Kubernetes Engine, Google DNS, Google IAM, Google Cloud Storage, Google Cloud Run, and many more.
Google Cloud offers three hundred dollars ($300) worth of credits and over twenty (20) free products. These free services don't expire. Learn More
Click here to view the transcript of the chat.
In this live stream, we covered several AWS services and use cases, such as IAM, EC2, DynamoDB, S3, Route 53, Certificate Manager, and many more.
Get free credits to test and deploy any service and tool on AWS: COUPON LINK
Click here to view the transcript of the chat.
Click here to estimate the latency from your browser to each AWS region.
How Twitter Uses Redis To Scale - 105TB RAM, 39MM QPS, 10,000+ Instances
The world’s largest DDoS attack took GitHub offline for fewer than 10 minutes
Kubernetes Operations (kops) is a simple and easy way to get a Kubernetes cluster up and running. It is production-ready and can be used for upgrades and management of your Kubernetes cluster.
Netlify is a web hosting infrastructure and automation technology platform. They offer services like CDN, Continuous deployment, 1-click HTTPS, and many more.
Before we continue, you should have the following:
Log in to your Netlify account, go to Domains, and click on Add or register domain.
Register your domain
Enter your domain
Click on continue
Copy your nameservers from your dashboard
Update it on the domain name registrar
You can create an S3 bucket by running:
aws s3api create-bucket --bucket k8.obinna.tech
Set up versioning for your S3 bucket; this will enable you to recover previous versions of the cluster state.
aws s3api put-bucket-versioning --bucket k8.obinna.tech --versioning-configuration Status=Enabled
Export the S3 bucket as an environment variable
export KOPS_STATE_STORE=s3://k8.obinna.tech
Run this command in your Terminal
ID=$(uuidgen) && \
aws route53 create-hosted-zone --name k8.obinna.tech --caller-reference $ID | jq .DelegationSet.NameServers
The command above will create a hosted zone and then output a set of values to be used to create a DNS record for your domain on Netlify.
It gives the following output below:
[ "ns-175.awsdns-21.com", "ns-1044.awsdns-02.org", "ns-560.awsdns-06.net", "ns-1732.awsdns-24.co.uk"]
Copy the output and update on your Netlify’s Domain DNS settings dashboard
Update your DNS Record
You should have something like this when you are done
Run a dig command to ensure your DNS has propagated
dig NS k8.obinna.tech
This will show the output below:
;; ANSWER SECTION:
k8.obinna.tech. 3600 IN NS ns-175.awsdns-21.com.
k8.obinna.tech. 3600 IN NS ns-560.awsdns-06.net.
k8.obinna.tech. 3600 IN NS ns-1732.awsdns-24.co.uk.
k8.obinna.tech. 3600 IN NS ns-1044.awsdns-02.org.
Run this command to create a Kubernetes Cluster
kops create cluster --name k8.obinna.tech \
  --zones us-east-1a --node-count=3 \
  --node-size=t2.medium --master-size=t2.small \
  --yes
It gives the following output below when it is done:
Cluster is starting. It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.obinna.tech
 * the admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
 * read about installing addons at: https://github.com/kubernetes/kops/blob/master/docs/operations/addons.md.
Let us briefly discuss the following flags:
name: This is the name used to create the hosted zone. It is important to pass this in order for kops to communicate with the Route 53 API.
zones: This is the availability zone specified for kops to create the Kubernetes cluster in that region. Always specify a zone in the same region as your S3 bucket.
node-count: This is the number of worker nodes you want kops to create.
node-size: This is the size of the AWS EC2 instance (popularly known as a virtual machine) that will be used to create the worker nodes.
master-size: This is the size of the AWS EC2 instance that will be used as the master node. This instance controls and sends requests to the worker nodes.
yes: This is a confirmation flag that allows kops to go ahead and create the Kubernetes cluster.
Run this command to see if the Kubernetes cluster is ready for workloads.
kubectl get nodes
This will output the following:
NAME                            STATUS  ROLES   AGE  VERSION
ip-172-20-40-210.ec2.internal   Ready   node    16m  v1.16.9
ip-172-20-43-35.ec2.internal    Ready   node    16m  v1.16.9
ip-172-20-49-102.ec2.internal   Ready   master  18m  v1.16.9
ip-172-20-62-148.ec2.internal   Ready   node    16m  v1.16.9
When you are done, destroy the Kubernetes cluster:
kops delete cluster --name k8.obinna.tech --yes
Big ups! You were able to create and destroy a Kubernetes cluster. I know it was a long road, but you finally made it to the end.
If you run into any issues or have suggestions, kindly drop a comment and I will get back to you as soon as possible. And don't forget to drop a response below if you liked this content.