Wednesday, 7 September 2022

Building a case study

 

Building an effective content marketing strategy that can take your prospects through every stage of the buyer's journey means creating a variety of content.

From relevant, informative blog content to engaging webpages, landing pages, whitepapers, and emails, a comprehensive content marketing strategy should run deep.

One powerful, but often underused, piece of content is the case study.

What Is a Case Study?

A case study is a special type of thought leadership content that tells a story.

Case studies are narratives that feature real-world situations or uses of products or services to demonstrate their value. A well-written case study will follow a customer as they define a problem, determine a solution, implement it, and reap the benefits.

Case studies offer readers the ability to see a situation from the customer's perspective from beginning to end. 

Why Case Studies Are Important

A marketing case study is one of the most compelling content items in your sales funnel.

It’s the perfect way to guide people into and through the decision phase, when they have the best options laid out on the table and they’re ready to puzzle through that final selection.

Because of this, case studies are uniquely useful as bottom of the funnel content.


By the time prospects are ready to read case studies, they have a nuanced grasp of the problem in front of them. They also have a good selection of potential solutions and vendors to choose from.

There may be more than one option that’s suitable for a given situation. In fact, there usually is. But there’s just one option that fits the prospect best. The challenge is figuring out which one.

Since B2B decision makers aren’t mind readers, they need content to bridge the gap between “what they know about your solution” and “what they know about their own business.” The case study does that by showing how a similar customer succeeded.

The more similar the prospect is to the customer in the case study, the more striking it will be.

For that reason, you might want to have a case study for every buyer persona you serve. And naturally, case studies pertain to specific products or services, not your whole brand.

So, you could find yourself with multiple case studies for each buyer type.

However, the effort is worth it, since case studies have a direct impact on sales figures.


How Long Should a Case Study Be?


Honestly, the more to-the-point you can be in a case study, the better.

Great case studies should pack a lot of meaning into a small space. In the best examples, your reader can grasp the single main idea of each page in a short paragraph or two.

Each detail should build on the next, so they’ll keep moving forward until the end without getting distracted.

Sure, it’s no Dan Brown novel, but if you do it right, it’ll still be a real page-turner.

Note: Some businesses keep a brief case study in PDF form to use as sales collateral, then publish a longer, more in-depth version of the same case study on their website. In that case, it is normal to write a lengthier case study.

Where Should I Put My Case Studies?


Anywhere you want, really!

Ideally, you should upload case studies somewhere on your website so new leads coming to your site have the opportunity to see just how kickass your business is at driving revenue and results for your current customers.

Whether it's an online case study or a PDF version, making your successes available to the public can prove just how valuable your efforts are.

Plus, make sure every member of your sales team has access to your case studies so they can use them as sales collateral to send to prospects and opportunities! A quick PDF attachment to a sales email can be very convincing.

The Best Case Study Format

Like press releases, case studies often follow a specific format.

While it’s not required that you have all of the possible topics in a particular order, picking a consistent format will help you accelerate production down the road. It also makes your content easier to read.

Many B2B businesses use the following approach:

  • Introduction: sets the stage by providing context for the situation.
  • Challenge: discusses the key problem that the customer was facing.
  • Solution: a basic overview of the product or service the customer used.
  • Benefit: recaps the solution’s top advantages – why it was the right choice.
  • Result: the positive business outcome arising from the solution and benefits.

This formula gives you enough flexibility to highlight what’s most important about your enterprise, solution, and the customer you’re showcasing.

At the same time, it ensures that your team will know exactly what information they need to compile to design case studies in the future.

It also serves as an intuitive trail of breadcrumbs for your intended reader.

How to Write a Case Study


1. Ask Your Client/Customer for Approval.

This first step is crucial because it determines the scope and level of detail for your entire case study.

If your client or customer gives the ok to use their name and information, then you can add as much detail as you want to highlight who they are, what you helped them do, and the results it had.

But, if they would rather remain anonymous or want you to leave out any specific details, you’ll have to find a way to keep your information more generalized while still explaining the impact of your efforts.

2. Gather Your Information.

Like any good story, a marketing case study has a beginning, middle, and end. Or, you could think of it as “before, during, and after.”

Before: The Problem

Your case study will always open by presenting a problem suffered by one of your clients.

This part of the study establishes what’s at stake and introduces the characters – your company, the client company, and whichever individual decision makers speak for each side.

During: The Solution

Once you define the problem, the next step presents your offering, which serves as the answer to the dilemma.

Your product or service is, in a very real sense, the hero of the story. It catalyzes the change, which you describe in terms of your features, advantages, and other differentiators.

After: The Result

In the final step, you discuss the “happy ending” brought about by your solution.

Returning to the “stakes” you established at the very start, you expand on how much better things are thanks to your intervention. You want prospects to imagine themselves enjoying that level of success.

3. Get a Quote.


Of course, a study about two corporations isn’t very interesting on its own. The best case studies personify the protagonists, including the vendor and the client company, by having plenty of quotes peppered throughout the entire story.

Naturally, the business problem to be solved is the big, bad villain here, so you want the client (and preferably, your own team as well) to weigh in on that problem: How complex it is, what solving it would mean, and what not solving it would cost.

Then, as the situation turns around, testimonials become essential.

Naturally, the longest, most emphatic testimonial should come from the top decision maker. But aim to include glowing quotes from several different stakeholders – representing the full cast of “characters” who might be involved in a consensus buying decision around your solution.

Note: Don’t use a testimonial or quote if your case study is anonymous. 

4. Find Some Compelling Graphics.

A case study isn’t a whitepaper: your reader shouldn’t be trudging through page after page of text.

In fact, some of the most powerful case studies establish their own vivid, graphics-heavy style – looking a lot more like an infographic, or even a magazine, than traditional B2B marketing collateral.

Color blocks, strong contrasts, skyscraper photography, and hero shots are all on the table when it comes to case studies. The more data you have to convey, the more creative you should be in presenting it so it can be understood at a glance. 





Wednesday, 25 May 2022

Java 7 installation on Ubuntu 18.04 / Ubuntu 20.04 (command-line installation)

 

Download the JDK for Linux 32-bit or 64-bit (for example: jdk-7u80-linux-x64.tar.gz)

  1. Navigate to ~/Downloads:

    cd ~/Downloads
    
  2. Create a directory in /usr/local where Java will reside and copy the tarball there:

    sudo mkdir -p /usr/local/java
    sudo cp -r jdk-7u80-linux-x64.tar.gz /usr/local/java/
    
  3. Navigate to /usr/local/java:

    cd /usr/local/java
    
  4. Extract the tarball:

    sudo tar xvzf jdk-7u80-linux-x64.tar.gz
    
  5. Check if tarball has been successfully extracted:

    ls -a
    

    You should see jdk1.7.0_80.

  6. Open /etc/profile with sudo privileges:

     sudo nano /etc/profile
    
  7. Scroll to the end of the file using the arrow keys and add the following lines at the end of /etc/profile:

     JAVA_HOME=/usr/local/java/jdk1.7.0_80
     JRE_HOME=/usr/local/java/jdk1.7.0_80 
     PATH=$PATH:$JRE_HOME/bin:$JAVA_HOME/bin
    
     export JAVA_HOME
     export JRE_HOME
     export PATH
    
  8. Update alternatives:

    sudo update-alternatives --install "/usr/bin/java" "java" "/usr/local/java/jdk1.7.0_80/bin/java" 1
    sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/local/java/jdk1.7.0_80/bin/javac" 1
    sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/local/java/jdk1.7.0_80/bin/javaws" 1
    sudo update-alternatives --set java /usr/local/java/jdk1.7.0_80/bin/java
    sudo update-alternatives --set javac /usr/local/java/jdk1.7.0_80/bin/javac
    sudo update-alternatives --set javaws /usr/local/java/jdk1.7.0_80/bin/javaws
    
  9. Reload profile:

    source /etc/profile
    
  10. Verify installation:

    java -version
    

    You should receive a message which displays:

    java version "1.7.0_80"
    Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
    Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
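As a non-interactive alternative to steps 6 and 7, the same lines can be appended with a quoted heredoc; this is a sketch assuming the jdk1.7.0_80 layout above (quoting 'EOF' prevents $PATH and $JRE_HOME from being expanded before they are written to the file):

```shell
# Append the Java environment variables to /etc/profile in one step
sudo tee -a /etc/profile > /dev/null <<'EOF'
JAVA_HOME=/usr/local/java/jdk1.7.0_80
JRE_HOME=/usr/local/java/jdk1.7.0_80
PATH=$PATH:$JRE_HOME/bin:$JAVA_HOME/bin
export JAVA_HOME JRE_HOME PATH
EOF
```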
    

Wednesday, 18 May 2022

Install Apache Tomcat 7 on CentOS 7 With Letsencrypt SSL Certificate

 



Apache Tomcat is a web server and servlet container that is used to serve Java applications. Tomcat is an open source implementation of the Java Servlet and JavaServer Pages technologies, released by the Apache Software Foundation.

Configure Tomcat Server to use Letsencrypt

This is a documentation of lessons learned from deploying ODKAggregate tomcat application and Letsencrypt SSL certificate.

The setup was based on a CentOS 7 server and Tomcat 7.0.69.

Tomcat installation

sudo yum -y install epel-release
sudo yum -y install tomcat tomcat-docs-webapp tomcat-javadoc tomcat-webapps tomcat-admin-webapps

Configure JAVA PATH

sudo yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel
sudo update-alternatives --config java
sudo update-alternatives --config javac

$ ls -l  /usr/lib/jvm

sudo tee -a /etc/bashrc <<'EOF'
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export PATH=$JAVA_HOME/bin:$PATH
EOF

(Quoting 'EOF' prevents $JAVA_HOME and $PATH from being expanded by the current shell before they are written to the file.)

$ source /etc/bashrc
$ echo $JAVA_HOME
$ java -version

Tomcat JAVA options file is /etc/tomcat/tomcat.conf, example config:

JAVA_OPTS="-Xms1024m -Xmx7328m -XX:MaxPermSize=5898m -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled"

If you would like to add admin user to manage Tomcat with GUI, this is done on file /usr/share/tomcat/conf/tomcat-users.xml under section:

<tomcat-users>
...
</tomcat-users>

Example:

<tomcat-users>
    <role rolename="manager-gui"/>
    <role rolename="admin-gui"/>
    <user username="admin" password="password" roles="manager-gui,admin-gui"/>
</tomcat-users>

Installing Letsencrypt

wget https://dl.eff.org/certbot-auto -P /usr/local/bin
chmod a+x /usr/local/bin/certbot-auto

Request Letsencrypt ssl certificate for domain

firewall-cmd --add-service https --permanent
firewall-cmd --reload
certbot-auto certonly -d odk2.domain.com

SSL contents will be located under /etc/letsencrypt/live/odk2.domain.com/

Create a PKCS12 file that contains both your full chain and the private key:

openssl pkcs12 -export -out /tmp/odk2.domain.com_fullchain_and_key.p12 \
    -in /etc/letsencrypt/live/odk2.domain.com/fullchain.pem \
    -inkey /etc/letsencrypt/live/odk2.domain.com/privkey.pem \
    -name tomcat

Convert that PKCS12 to a JKS

keytool -importkeystore \
    -deststorepass ughubieVahfaej5 -destkeypass ughubieVahfaej5 -destkeystore /etc/ssl/odk2.domain.com.jks \
    -srckeystore /tmp/odk2.domain.com_fullchain_and_key.p12 -srcstoretype PKCS12 -srcstorepass ughubieVahfaej5 \
    -alias tomcat

Replace ughubieVahfaej5 with your password

Configure tomcat server

# vim /etc/tomcat/server.xml

Ensure the following section is commented out

  <!--
    <Connector port="8080" protocol="HTTP/1.1"
            connectionTimeout="20000"
            redirectPort="8443" />
    -->

Configure connector to use a shared thread pool

 <Connector executor="tomcatThreadPool"
            port="8080" protocol="HTTP/1.1"
            connectionTimeout="20000"
            redirectPort="8443" />

Next is to define SSL HTTP/1.1 Connector on port 8443

 <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
            maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
            keystoreFile="/etc/ssl/odk2.domain.com.jks"
            keystorePass="ughubieVahfaej5"
            clientAuth="false" sslProtocol="TLS" />

With the above configuration, requests to resources that require a secure connection are redirected from port 8080 to 8443. The application can be accessed at:

http://server_IP_address:8080

Manager App

http://server_IP_address:8080/manager/html

Bash script to Auto renew with a cron job

It is good practice to automate renewal using a Linux cron job. For this, take a look at:

Bash Script to Auto-renew Letsencrypt SSL certificate on Tomcat
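The linked script is not reproduced here, but a minimal sketch of such a renewal job, assuming the domain, paths, and keystore password used above, might look like:

```shell
#!/bin/bash
# Hypothetical renewal sketch: renew the certificate, repackage it
# for Tomcat, and restart the service. Adjust names to your setup.
DOMAIN=odk2.domain.com
STOREPASS=ughubieVahfaej5   # replace with your keystore password

# Renew the Letsencrypt certificate
/usr/local/bin/certbot-auto renew --quiet

# Rebuild the PKCS12 bundle from the renewed cert and key
openssl pkcs12 -export -out /tmp/${DOMAIN}_fullchain_and_key.p12 \
    -in /etc/letsencrypt/live/${DOMAIN}/fullchain.pem \
    -inkey /etc/letsencrypt/live/${DOMAIN}/privkey.pem \
    -name tomcat -passout pass:${STOREPASS}

# Re-import it into the JKS keystore Tomcat reads
keytool -importkeystore -noprompt \
    -deststorepass ${STOREPASS} -destkeypass ${STOREPASS} \
    -destkeystore /etc/ssl/${DOMAIN}.jks \
    -srckeystore /tmp/${DOMAIN}_fullchain_and_key.p12 \
    -srcstoretype PKCS12 -srcstorepass ${STOREPASS} \
    -alias tomcat

systemctl restart tomcat
```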

Kubernetes installation

 

Set up Docker

Step 1: Install Docker

Kubernetes requires an existing Docker installation. If you already have Docker installed, skip ahead to Step 2.

If you do not have Docker installed, follow these steps:

1. Update the package list with the command:

sudo apt-get update

2. Next, install Docker with the command:

sudo apt-get install docker.io

3. Repeat the process on each server that will act as a node.

4. Check the installation (and version) by entering the following:

docker --version

Step 2: Start and Enable Docker

1. Set Docker to launch at boot by entering the following:

sudo systemctl enable docker

2. Verify Docker is running:

sudo systemctl status docker

To start Docker if it’s not running:

sudo systemctl start docker

3. Repeat on all the other nodes.

 

Install Kubernetes

Step 1: Install Kubernetes

In this step, we will be installing Kubernetes. Just like you did with Docker in the prerequisites, you must run the commands on both nodes to install Kubernetes. Use ssh to log in to both nodes and proceed. Start by installing the apt-transport-https package, which enables working with http and https in Ubuntu's repositories, and curl, which will be needed in the next steps. Execute the following command:
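On Ubuntu 20.04 this is typically:

```shell
sudo apt-get update
sudo apt-get install -y apt-transport-https curl
```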

Then, add the Kubernetes signing key to both nodes by executing the command:
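At the time this was written, the key was added via apt-key from Google's package mirror, which hosted the Kubernetes debs (note that apt-key is deprecated on newer Ubuntu releases):

```shell
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
```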

Next, we add the Kubernetes repository as a package source on both nodes using the following command:
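The usual repository line for this era of packages (the kubernetes-xenial channel served all recent Ubuntu releases):

```shell
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```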

After that, update the nodes:
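That is simply:

```shell
sudo apt-get update
```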

  • Install Kubernetes tools

Once the update completes, we will install Kubernetes. This involves installing the various tools that make up Kubernetes: kubeadm, kubelet, kubectl, and kubernetes-cni. These tools are installed on both nodes. We define each tool below:

  • kubelet – an agent that runs on each node and handles communication with the master node to initiate workloads in the container runtime.

  • kubeadm – part of the Kubernetes project that helps initialize a Kubernetes cluster.

  • kubectl – the Kubernetes command-line tool that allows you to run commands against Kubernetes clusters.

  • kubernetes-cni – enables networking within the containers, ensuring containers can communicate and exchange data.

You can install all four with a single command:
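Assuming the apt repository configured earlier, that command is:

```shell
sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni
```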

Step 2: Disabling Swap Memory

Kubernetes fails to function in a system that is using swap memory. Hence, it must be disabled in the master node and all worker nodes. Execute the following command to disable swap memory:
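Swap is turned off immediately with:

```shell
sudo swapoff -a
```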

This command disables swap memory until the system is rebooted. We have to ensure that it remains off even after reboots. This has to be done on the master and all worker nodes. We can do this by editing the fstab file and commenting out the /swapfile line with a #. Open the file with the nano text editor by entering the following command:
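Assuming nano is available:

```shell
sudo nano /etc/fstab
```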

Inside the file, comment out the /swapfile line by prefixing it with a #.


If you do not see the swapfile line, just ignore it. Save and close the file when you are done editing. Follow the same process for both nodes. Now, swap memory settings will remain off, even after your server reboots.

Step 3: Setting Unique Hostnames

Your nodes must have unique hostnames for easier identification. If you are deploying a cluster with many nodes, you can give your worker nodes identifying names such as node-1, node-2, and so on. As mentioned earlier, we have named our nodes kubernetes-master and kubernetes-worker, set at the time of creating the servers. If you have not already done so, you can adjust the hostnames from the command line. To adjust the hostname on the master node, run the following command:
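Assuming the hostname kubernetes-master used in this tutorial:

```shell
sudo hostnamectl set-hostname kubernetes-master
```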

On the worker node, run the following command:
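Likewise, assuming the name kubernetes-worker:

```shell
sudo hostnamectl set-hostname kubernetes-worker
```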

You may close the current terminal session and ssh back into the server to see the changes.

Step 4: Letting Iptables See Bridged Traffic

For the master and worker nodes to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your config. First, ensure the br_netfilter module is loaded. You can confirm this by issuing the command:
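A quick way to check is:

```shell
lsmod | grep br_netfilter
```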

Optionally, you can explicitly load it with the command:
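If it is not loaded:

```shell
sudo modprobe br_netfilter
```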

Now, you can run this command to set the value to 1:
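A one-off setting (add the same key to a file under /etc/sysctl.d to persist it across reboots):

```shell
sudo sysctl net.bridge.bridge-nf-call-iptables=1
```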

Step 5: Changing Docker Cgroup Driver

By default, Docker installs with cgroupfs as the cgroup driver. Kubernetes recommends that Docker run with systemd as the driver. If you skip this step and try to initialize kubeadm in the next step, you will get a preflight warning in your terminal that cgroupfs was detected and that systemd is the recommended driver.

On both master and worker nodes, update the cgroup driver with the following commands:
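A common way to do this is to write Docker's daemon.json (this overwrites any existing daemon.json, so merge by hand if you already have one):

```shell
# Tell Docker to use the systemd cgroup driver
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```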

Then, execute the following commands to restart and enable Docker on system boot-up:
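Those commands are:

```shell
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
```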

Once that is set, we can proceed to the fun stuff, deploying the Kubernetes cluster!

Step 6: Initializing the Kubernetes Master Node

The first step in deploying a Kubernetes cluster is to fire up the master node. While on the terminal of your master node, execute the following command to initialize the kubernetes-master:
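Given the pod network CIDR discussed later in this step, the init command takes the form:

```shell
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```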

If you execute the above command and your system doesn't match the expected requirements, such as minimum RAM or CPU as explained in the Prerequisites section, you will get a warning and the cluster will not start.


Note: If you are building for production, it’s a good idea to always meet the minimum requirements for Kubernetes to run smoothly. However, if you are doing this tutorial for learning purposes, then you can add the following flag to the kubeadm init command to ignore the error warnings:
sudo kubeadm init --ignore-preflight-errors=NumCPU,Mem --pod-network-cidr=10.244.0.0/16

If the initialization succeeds, kubeadm prints a confirmation message. We also added a flag to specify the pod network CIDR, 10.244.0.0/16, the default range that kube-flannel uses. We will discuss the pod network in the next step.


In the output, you can see the kubeadm join command (we’ve hidden our IP address) and a unique token that you will run on the worker node and any other worker nodes that you want to join to this cluster. Copy this command, as you will use it later on the worker node.

In the output, Kubernetes also displays some additional commands that you should run as a regular user on the master node before you start to use the cluster. Let’s run these commands:
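kubeadm prints the standard kubeconfig setup commands for a regular user:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```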

We have now initialized the master node. However, we also have to set up the pod network on the master node before we join the worker nodes.

Step 7: Deploying a Pod Network

A pod network facilitates communication between servers and it’s necessary for the proper functioning of the Kubernetes cluster. You can read more about Kubernetes Cluster Networking from the official docs. We will be using the Flannel pod network for this tutorial. Flannel is a simple overlay network that satisfies the Kubernetes requirements.

Before we deploy the pod network, we need to check on the firewall status. If you have enabled the firewall after following step 5 of the tutorial on setting up your Ubuntu server, you must first add a firewall rule to create exceptions for port 6443 (the default port for Kubernetes). Run the following ufw commands on both master and worker nodes:
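For the default Kubernetes API port:

```shell
sudo ufw allow 6443
sudo ufw allow 6443/tcp
```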

After that, you can run the following command to deploy the pod network on the master node:
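At the time this was written, Flannel's manifest lived in the coreos/flannel repository (it has since moved to the flannel-io organization):

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```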

Depending on your environment, it may take anywhere from a couple of seconds to a minute for the flannel network to come up. Run the following command to confirm that everything is fired up:
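That is:

```shell
kubectl get pods --all-namespaces
```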

If everything was successful, the output should show all pods with a Running status.


You can also view the health of the components using the get component status command:
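```shell
kubectl get componentstatus
```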


This command has a short form cs:
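```shell
kubectl get cs
```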


If you see an unhealthy status (typically for the scheduler and controller-manager), edit the corresponding static pod manifests and delete the line (under spec->containers->command) containing --port=0. On a kubeadm-based setup these manifests are usually /etc/kubernetes/manifests/kube-scheduler.yaml and /etc/kubernetes/manifests/kube-controller-manager.yaml.

Finally, restart the kubelet service:
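```shell
sudo systemctl restart kubelet
```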

Step 8: Joining Worker Nodes to the Kubernetes Cluster

With the kubernetes-master node up and the pod network ready, we can join our worker nodes to the cluster. In this tutorial, we only have one worker node, so we will be working with that. If you have more worker nodes, you can always follow the same steps as we will explain below to join the cluster.

First, log into your worker node on a separate terminal session. You will use your kubeadm join command that was shown in your terminal when we initialized the master node in Step 6. Execute the command:
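The join command is unique to your cluster; its general shape is shown below with placeholders for your own IP address, token, and CA certificate hash:

```shell
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```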

When it completes, you should see output confirming that the node has joined the cluster.


Once the joining process completes, switch to the master node terminal and execute the following command to confirm that your worker node has joined the cluster:
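```shell
kubectl get nodes
```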

The output of the command above should list the worker node as part of the cluster.


Step 9: Deploying an Application to the Kubernetes Cluster

At this point, you have successfully set up a Kubernetes cluster. Let’s make the cluster usable by deploying a service to it. Nginx is a popular web server boasting incredible speeds even with thousands of connections. We will deploy the Nginx webserver to the cluster to prove that you can use this setup in a real-life application.

Execute the following command on the master node to create a Kubernetes deployment for Nginx:
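Using the stock nginx image from Docker Hub:

```shell
kubectl create deployment nginx --image=nginx
```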

You can view the created deployment by using the describe deployment command:
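```shell
kubectl describe deployment nginx
```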


To make the nginx service accessible via the internet, run the following command:
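One way is to expose the deployment as a NodePort service on port 80 (the service name is assumed here to match the deployment):

```shell
kubectl create service nodeport nginx --tcp=80:80
```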


The command above will create a public-facing service for the Nginx deployment. Because this is a NodePort service, Kubernetes assigns it a port in the 30000-32767 range.

You can get the current services by issuing the command:
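```shell
kubectl get svc
```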


You can see that our assigned port is 32264. Take note of the port displayed in your terminal to use in the next step.

To verify that the Nginx service deployment is successful, issue a curl call to the worker node from the master. Replace your worker node IP and the port you got from the above command:
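With placeholders for your own values:

```shell
curl http://<worker-node-ip>:32264
```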

You should see the HTML of the default Nginx index page.


Optionally, you can visit the worker node IP address and port combination in your browser and view the default Nginx index page.


You can delete a deployment by specifying the name of the deployment. For example, this command will delete our deployment:
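```shell
kubectl delete deployment nginx
```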

We have now successfully tested our cluster!

Conclusion

In this tutorial, you have learned how to install a Kubernetes cluster on Ubuntu 20.04. You set up a cluster consisting of a master and a worker node. You installed the Kubernetes toolset, created a pod network, and joined the worker node to the cluster. We also tested our setup by doing a basic deployment of an Nginx web server to the cluster. This should serve as a foundation for working with Kubernetes clusters on Ubuntu.

While we only used one worker node, you can extend your cluster with as many nodes as you wish. If you would like to get deeper into DevOps with automation tools like Ansible, we have a tutorial that delves into provisioning Kubernetes cluster deployments with Ansible and Kubeadm; check it out. If you want to learn how to deploy a PHP application on a Kubernetes cluster, check this tutorial.

Happy Computing!