Sunday, 22 November 2020

Change App Icon in React Native for Android and iOS

App Icon for Android and iOS

An application icon is the unique identification of an app, and it is often what users remember best. In many cases, a user recalls the application icon rather than the application name. The app icon can be your brand logo or anything else, but it should convey the purpose of your application.

In this example, we will see how to change the application icon in React Native, covering both Android and iOS. Please note that if you recreate or update the platform directories (the Android / iOS folders in the project) after setting the application icon, you will have to set up the icon again.

To change the application icon, follow the example below. Let’s get started.

First of all, we have to create an App.

Look at the Old Icon

If you are developing an app (either native or hybrid), you are provided a default app icon for both platforms. If you can run the application, have a look at the current icon.

To run the application on Android

react-native run-android

To run the application on iOS

react-native run-ios

How to Make Icons in Multiple Sizes?

To change this default icon, we need our own application icon in different sizes for different devices. You can create an app icon yourself, or you can download one from Google Images, but only if it is free of copyright.

We are going to set this Icon as our App Icon

(Please use 1024px * 1024px size image)

Once you have your app icon ready, you have to make it in multiple sizes for both Android and iOS.

makeappicon.com will also generate app icons for both platforms. These guys are doing a great job: you just need to upload your icon on their website, and they will provide icons in multiple sizes, arranged in the proper folder structure.

Other than that you can also explore:

1. Icon Set Creator for iOS

2. Android Asset Studio for Android.

3. resizeappicon.com for both Android and iOS.

Setting App Icon for Android Applications

To change the Android application icon, copy all the mipmap-* directories from the android directory of the downloaded makeappicon zip.

Now navigate to the res directory of your project (YourProject -> android -> app -> src -> main -> res) and replace the default icons with the newly downloaded icons.
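The replace step can be sketched in the shell. This is only a demo run entirely inside a temporary directory; in a real project, SRC would be the unzipped makeappicon android folder and DEST would be YourProject/android/app/src/main/res (both paths here are illustrative):

```shell
# Demo of replacing the mipmap-* icon folders, run inside a temp directory.
tmp=$(mktemp -d)
SRC="$tmp/AppIcons/android"                       # stands in for the unzipped icons
DEST="$tmp/YourProject/android/app/src/main/res"  # stands in for your project's res dir
mkdir -p "$SRC/mipmap-hdpi" "$SRC/mipmap-xhdpi" "$DEST"
touch "$SRC/mipmap-hdpi/ic_launcher.png" "$SRC/mipmap-xhdpi/ic_launcher.png"

# The actual copy: overwrite the default icons with the generated ones.
cp -r "$SRC"/mipmap-* "$DEST"/
ls "$DEST"
```

On a real project you would point SRC at the extracted zip and DEST at your app's res directory, then rebuild with react-native run-android.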

Now open the terminal and run the project again using

react-native run-android

Here we can see that the application icon has been changed.

Setting App Icon for iOS Applications Directly

To change the application icon for iOS, copy all the contents of AppIcon.appiconset from the ios -> AppIcon.appiconset directory of the downloaded makeappicon zip.

After copying the downloaded icons, paste them into your project’s AppIcon.appiconset directory (YourProject -> ios -> YourProject -> Images.xcassets -> AppIcon.appiconset). If it asks to replace the Contents.json, click yes to replace it.

Open the terminal again and run the project again using

react-native run-ios

Here we can see that the application icon has been changed.

Setting App Icon for iOS Applications using XCode

You can also do the same for iOS from Xcode. You just need to open the project in Xcode by opening the Your_Project -> ios -> YourProject.xcworkspace file.

After opening the project, expand it, find Images.xcassets, and click on it.

You will see a new window with some empty icon slots.

Now open your downloaded icon folder (ios -> AppIcon.appiconset) and simply drag each icon onto the empty slot with the matching point (pt) size and scale (2x or 3x).

Open the terminal again and run the project again using

react-native run-ios

This is how you can change the icon of your React Native application for both Android and iOS.

Alternate Way to Change App Icon in React Native Using Command Line Interface

If you are using macOS or Ubuntu, you can also use the alternate way below. If you are a Windows user, you have to use the method above.

Motivation

In the Ionic app development process, we can use a single command to change the icon, so why not in React Native too? I started looking into different ways to do that and finally had some success with RN ToolBox. Let’s see how to change the app icon using the command line interface.

Installation of RN ToolBox

RN ToolBox will help you to set up your app icon from the command line, but for that you need Node version 6 or above. If you are using the correct Node version, you can install the generator using

npm install -g yo generator-rn-toolbox

To generate your icons, generator-rn-toolbox uses ImageMagick. Ubuntu users can skip this step, but Mac users should run

brew install imagemagick

Set the Icon for Android and iOS Application

After installing the required tools, we need an application icon. A size of at least 200px x 200px is recommended.

Now after making the icon, we have to run the following command to set up the icon for our application

yo rn-toolbox:assets --icon <path to your icon>

You will be asked for the name of your react-native project; just copy and paste the name of your application.

You will be asked to replace the Contents.json file; input y and hit enter.

Congratulations! You have successfully updated your app icon from the command line.


This is how you can change the App Icon using Command Line Interface. If you have anything else to share please comment below or contact us here.

Hope you liked it:)

Sunday, 12 January 2020

The Advantages of Using Kubernetes and Docker Together


Christian Melendez Developer Tips, Tricks & Resources
You might be hearing a lot about Kubernetes and Docker—so much that you might be wondering which one is better.
Well, there is no “better” because these aren’t equivalent things. Docker is like an airplane and Kubernetes is like an airport. You wouldn’t ask “Which should I use to travel—airport versus airplane?” So it goes with Docker and Kubernetes. You need both.
In this post, we’ll run through a deployment scenario, how containers and orchestrators can help, and how a developer would use them on a daily basis. You’ll walk away from this post with an understanding of how all the pieces of the puzzle fit together.

Everything starts with your local environment

So let me start with a typical day in the life of someone who struggles through every deployment. Then I’ll explain how these two technologies can help. For practical purposes, we’ll talk about the fictional developer John Smith. John’s a developer working for a startup, and he’s responsible for deploying his code to a live environment.
John has two apps: one in .NET Core and another in Node.js. He struggles every time a new version of the language, framework, or library comes out and he has to run an upgrade. The problem is when things aren’t compatible with what he’s installed. When something’s not working, he just installs, uninstalls, updates, or removes until finally things get back up and running. The struggle becomes even bigger when he has to push a new change after doing all of that to another environment. It’s kind of hard to remember all the steps when we’re in a rush.
One solution could be for him to work with virtual machines (VMs). That way, he can isolate all dependencies and avoid affecting any existing apps and their dependencies.
While that could work, it doesn’t scale. Why? Because every time something changes, he has to take a new snapshot. And then he has to somehow organize all the different versions of those VM snapshots. He’ll still need to deploy changes in code and any dependencies to other environments. Now, he can screw things up in other environments too and then fix it, and that’s okay. But when we’re talking about production, things get risky. He has to work with production-like environments to ease deployments and reduce risk. That’s hard to do.
Even having automation in place, deployments might be too complex or painful. Maybe John even has to spend a whole weekend doing deployments and fixing all sorts of broken things.
We all wish deployments could be as boring as pushing a button. The good news is that that’s where Docker and Kubernetes come into play.

Use Docker to pack and ship your app

Kubernetes and Docker
So, what is Docker anyway?
Docker is a company that provides a container platform. Containers are a way to pack and isolate a piece of software with everything it needs to run. I mean “isolate” in the sense that containers are assigned separate resources from the host where they’re running. You might be thinking this sounds pretty similar to VMs, but the difference is that containers are more lightweight: they don’t need a guest OS to make software run. Containers let you be more agile and build secure, portable apps, which lets you save some infrastructure costs when done well.
I know that sounds like a textbook definition, so let’s see how this is beneficial by following the day in the life of John.
Let’s say John decides to start his container journey. He learns that Docker containers work with base images as their foundation for running an app. A base image and all its dependencies are described in a file called a “Dockerfile.” A Dockerfile is where you define something like the recipe that you usually keep in docs (or in your head) for anyone who wants to run your app. He starts with the .NET Core app, whose Dockerfile looks like this:
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app

# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out

# Build runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "hello.dll"]
As you can see, it’s as if you were programming. The only difference is that you’re just defining all dependencies and declaring how to build and run the app.
John needs to put that file in the root of the source code and run the following command:
docker build -t dotnetapp .
This command will create an image with the compiled code and all of its dependencies to run. He’ll only do the “build” once, because the idea is to make the app portable so it runs anywhere. After that, when he wants to run the app, only Docker needs to be installed. He just needs to run the following command:
docker run -d -p 80:80 dotnetapp
This command will start running the app on port 80 of the host. It doesn’t matter where he runs this command. As long as port 80 isn’t in use, the app will work.
John is now ready to ship the app anywhere because he’s packed it in a Docker container.
So why is this better? Well, John doesn’t have to worry about forgetting what he installed on his local computer or on any other server. When the team grows, a new developer will rapidly start coding. When John’s company hires an operations person, the new hire will know exactly what’s included in the container. And if they want to upgrade the framework or some dependency, they’ll do it without worrying about affecting what’s currently working.
Use Docker to pack and ship your app without worrying too much about whether the app will work somewhere else after you’ve tested it locally. If it works on your machine, it will work on others’ machines.
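The article mentions John also has a Node.js app. Its Dockerfile follows the same pattern; the sketch below is hypothetical, assuming a package.json plus a server.js that listens on port 80 (the base image tag and file names are assumptions, not from the original):

```dockerfile
# Hypothetical Dockerfile for John's Node.js app (file names are assumptions)
FROM node:8
WORKDIR /app

# Copy the manifest first so dependency installation is cached between builds
COPY package*.json ./
RUN npm install --production

# Copy the rest of the source and declare how to run the app
COPY . .
EXPOSE 80
CMD ["node", "server.js"]
```

It builds and runs exactly like the .NET Core app: docker build -t nodeapp . and then docker run -d -p 80:80 nodeapp.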

Use Kubernetes to deploy and scale your app

So, John now just needs to go to each of the servers where he wants to ship the app and start a container. Let’s say that, in production, he has ten servers to support the traffic load. He has to run the previous command on all the servers. And if for some reason the container dies, he has to go to that server and run the command to start it again.
Wait. This doesn’t sound like an improvement, right? It’s not much different than spinning up VMs. When something goes down, he’ll still need to manually go and start containers again. He could automate that task too, but he’ll need to take into consideration things like health checks and available resources. So here’s where Kubernetes comes into play.
Kubernetes, as its site says, “is an open-source system for automating deployment, scaling, and management of containerized applications.” There are others of its type, but Kubernetes is the most popular one right now. Kubernetes does the container orchestration so you don’t have to script those tasks. It’s the next step after containerizing your application, and it’s how you’ll run your containers at scale in production.
Kubernetes will help you to deploy the same way everywhere. Why? Because you just need to say, in a declarative language, how you’d like to run your containers. You’ll have a load balancer, a minimum number of containers running, and the ability to scale up or down only when needed—things that you’d otherwise need to create and configure separately. You’ll have everything you need to run at scale, all in the same place. But it’s not just that. You can also run your own Kubernetes cluster locally, thanks to Minikube. Or you can use Docker, because Docker now officially supports Kubernetes.
So, coming back to John. He can define how he wants to deploy an app called “dotnetapp” at scale.
Take a look at the “dotnetapp-deployment.yaml” file, where John defines how to do deployments in a Kubernetes cluster, including the app’s container-level configuration. In this case, besides launching the dotnetapp with three replicas behind a load balancer, it also points the app at its database through an environment variable. Here’s how the file looks:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: dotnetapp
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: dotnetapp
    spec:
      containers:
      - name: dotnetapp
        image: johndoe/dotnetapp:1.0
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
        env:
        - name: DB_ENDPOINT
          value: "dotnetappdb"
---
apiVersion: v1
kind: Service
metadata:
  name: dotnetapp
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: dotnetapp
John now just needs to run this command to deploy the app in any Kubernetes cluster, locally or in another cluster:
kubectl apply -f .\dotnetapp-deployment.yaml
This command will create everything that’s needed, or it will just apply an update, if there is one.
He can run the exact same command on his computer or in any other environment, including production, and it will work the same way everywhere. But it’s not just that. Kubernetes constantly checks the state of your deployment against the yaml definition you use. So if a Docker container goes down, Kubernetes will spin up a new one automatically. John no longer has to go to each server where a container failed to start it up again; the orchestrator takes care of that for him. And there will be something monitoring the state to make sure it’s compliant, meaning it’s running as expected, all the time.
That’s how you could easily get to doing several deployments a day that take around five minutes.

You’ll deliver quickly, consistently, and predictably

Now you know what Docker and Kubernetes are—and not just in concept. You also have a practical perspective. Both technologies use a declarative language to define how they will run and orchestrate an app.
You’ll be able to deliver faster, but more importantly, you’ll deliver in a consistent and predictable manner. Docker containers will help you to isolate and pack your software with all its dependencies. And Kubernetes will help you to deploy and orchestrate your containers. This lets you focus on developing new features and fixing bugs more rapidly. Then you’ll notice, at some point, your deployments stop being a big ceremony.
So, the main thing to remember is this: when you combine Docker and Kubernetes, confidence and productivity increase for everyone.

Setting up Kubernetes with Multiple Nodes

What is Kubernetes?

Kubernetes is a free and open-source container management system that provides a platform for deployment automation, scaling, and operation of application containers across clusters of hosts. With Kubernetes, you can freely make use of hybrid, on-premise, and public cloud infrastructure to run the deployment tasks of your organization.
In this tutorial, we will explain how to install Kubernetes on an Ubuntu system and also deploy Kubernetes on a two-node Ubuntu cluster.
The commands and procedures mentioned in this article have been run on an Ubuntu 18.04 LTS system. Since we will be using the Ubuntu command line, the Terminal, for running all the commands, you can open it either through the system Dash or the Ctrl+Alt+T shortcut.

Kubernetes Installation

The two-node cluster that we will be forming in this article will consist of a Master node and a Slave node. Both these nodes need to have Kubernetes installed on them. Therefore, follow the steps described below to install Kubernetes on both the Ubuntu nodes.

Step 1: Install Docker on both the nodes

Install the Docker utility on both the nodes by running the following command as sudo in the Terminal of each node:
$ sudo apt install docker.io
Installing Docker
You will be prompted with a Y/n option in order to proceed with the installation. Please enter Y and then hit enter to continue. Docker will then be installed on your system. You can verify the installation and also check the version number of Docker through the following command:
$ docker --version
Check Docker version

Step 2: Enable Docker on both the nodes

Enable the Docker utility on both the nodes by running the following command on each:
$ sudo systemctl enable docker
Enable Docker service

Step 3: Add the Kubernetes signing key on both the nodes

Run the following command in order to get the Kubernetes signing key:
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
Add the Kubernetes signing key
If curl is not installed on your system, you can install it through the following command:
$ sudo apt install curl
Install Curl
You will be prompted with a Y/n option in order to proceed with the installation. Please enter Y and then hit enter to continue. The Curl utility will then be installed on your system.

Step 4: Add Xenial Kubernetes Repository on both the nodes

Run the following command on both the nodes in order to add the Xenial Kubernetes repository:
$ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Add Xenial Kubernetes Repository

Step 5: Install Kubeadm

The final step in the installation process is to install Kubeadm on both the nodes through the following command:
$ sudo apt install kubeadm
Install Kubeadm
You will be prompted with a Y/n option in order to proceed with the installation. Please enter Y and then hit enter to continue. Kubeadm will then be installed on your system.
You can check the version number of Kubeadm and also verify the installation through the following command:
$ kubeadm version
Check Kubeadm version

Kubernetes Deployment

Step 1: Disable swap memory (if running) on both the nodes

You need to disable swap memory on both nodes, as Kubernetes does not perform properly on a system that uses swap memory. Run the following command on both nodes to disable swap memory:
$ sudo swapoff -a
Disable swap space
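Note that swapoff -a only disables swap until the next reboot; to keep it off permanently, the swap entry in /etc/fstab is usually commented out as well. Here is a sketch of that edit, demonstrated on a temporary copy of the file rather than the real /etc/fstab (back up the real file before editing it):

```shell
# Demonstrate commenting out the swap line, on a throwaway copy of /etc/fstab.
tmp=$(mktemp)
printf '/dev/sda1 / ext4 defaults 0 1\n/swapfile none swap sw 0 0\n' > "$tmp"

# Prefix any line containing " swap " with '#', disabling that mount at boot.
sed -i '/ swap / s/^/#/' "$tmp"
cat "$tmp"
```

On the real system, the same sed command would target /etc/fstab and be run with sudo.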

Step 2: Give Unique hostnames to each node

Run the following command in the master node in order to give it a unique hostname:
$ sudo hostnamectl set-hostname master-node
Run the following command in the slave node in order to give it a unique hostname:
$ sudo hostnamectl set-hostname slave-node

Step 3: Initialize Kubernetes on the master node

Run the following command as sudo on the master node:
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
The process might take a minute or more depending on your internet connection. The output of this command is very important:
Initialize Kubernetes on the master node
Please note down the following information from the output:
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.100.6:6443 --token 06tl4c.oqn35jzecidg0r0m --discovery-token-ca-cert-hash sha256:c40f5fa0aba6ba311efcdb0e8cb637ae0eb8ce27b7a03d47be6d966142f2204c
Now run the commands suggested in the output in order to start using the cluster:
Start Kubernetes Cluster
You can check the status of the master node by running the following command:
$ kubectl get nodes
Get list of nodes
You will see that the status of the master node is “NotReady”. This is because no pod network has been deployed yet, so the Container Network Interface is empty.

Step 4: Deploy a Pod Network through the master node

A pod network is a medium of communication between the nodes of a network. In this tutorial, we are deploying a Flannel pod network on our cluster through the following command:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Deploy a Pod Network

Use the following command in order to view the status of the network:
$ kubectl get pods --all-namespaces
Check network status
Now when you see the status of the nodes, you will see that the master-node is ready:
$ kubectl get nodes
Get nodes

Step 5: Add the slave node to the network in order to form a cluster

On the slave node, run the following command you generated while initializing Kubernetes on the master-node:
$ sudo kubeadm join 192.168.100.6:6443 --token 06tl4c.oqn35jzecidg0r0m --discovery-token-ca-cert-hash sha256:c40f5fa0aba6ba311efcdb0e8cb637ae0eb8ce27b7a03d47be6d966142f2204c
Add the slave node to the network
Now when you run the following command on the master node, it will confirm that two nodes, the master node and the slave node, are running in your cluster.
$ kubectl get nodes
This shows that the two-node cluster is now up and running through the Kubernetes container management system.
In this article, we explained how to install the Kubernetes container management system on two Ubuntu nodes. We then formed a simple two-node cluster and deployed Kubernetes on it. You can now deploy any service, such as an Nginx server or an Apache container, to make use of this clustered network.