Channel: Severalnines - ClusterControl

ClusterControl Makes Galera Cluster Enterprise-grade for Black Hills Corp


Severalnines is excited to announce its newest customer, Black Hills Corp, an energy provider operating in the United States.

In the case study, you can learn how Black Hills utilized ClusterControl to migrate their open source databases to Galera Cluster to operate alongside their Oracle and Microsoft SQL Server database servers.

Requiring enterprise-grade management and monitoring, Black Hills selected Severalnines for our expertise with the technology, top-level support, and a feature-rich product to help with ongoing management.

Read the case study to learn more.

ClusterControl
Single Console for Your Entire Database Infrastructure
Find out what else is new in ClusterControl

About ClusterControl

ClusterControl is the all-inclusive open source database management system for users with mixed environments that removes the need for multiple management tools. ClusterControl provides advanced deployment, management, monitoring, and scaling functionality to get your MySQL, MongoDB, and PostgreSQL databases up-and-running using proven methodologies that you can depend on to work. At the core of ClusterControl is its automation functionality that lets you automate many of the database tasks you have to perform regularly, like deploying new databases, detecting anomalies, recovering nodes from failures, adding and scaling new nodes, running backups and upgrades, and more.

About Severalnines

Severalnines provides automation and management software for database clusters. We help companies deploy their databases in any environment, and manage all operational aspects to achieve high-scale availability.

Severalnines' products are used by developers and administrators of all skill levels to provide the full 'deploy, manage, monitor, scale' database cycle, thus freeing them from the complexity and learning curves that are typically associated with highly available database clusters. Severalnines is often called the “anti-startup” as it is entirely self-funded by its founders. The company has enabled over 12,000 deployments to date via its popular product ClusterControl, and currently counts BT, Orange, Cisco, CNRS, Technicolor, AVG, Ping Identity and Paytrail as customers. Severalnines is a private company headquartered in Stockholm, Sweden with offices in Singapore, Japan and the United States.


How ClusterControl Enables Financial Technology


Open source database technology has matured to the point where it is now being used widely in financial, eCommerce, and payment processing applications.  

Furthermore, FinTech applications are held to very high standards when it comes to security, governance, compliance and data integrity.

With ClusterControl you benefit from the innovation, performance and cost savings that come with open source technology, while maintaining the enterprise-grade quality and dependability you expect from a comprehensive database management system.

ClusterControl offers a wide array of features to ensure your data management needs are met, including:

  • Deployment Automation - ClusterControl allows you to deploy the top 3 open source databases in the world (MySQL/MariaDB, MongoDB and PostgreSQL), with setups ranging from single-instance to master-slave replication, and all the way to shared-nothing clusters running on commodity hardware.
  • Advanced Security - ClusterControl supports secure, encrypted connections between nodes using the SSL protocol. Both the connections between database clients and servers and the database's own replication traffic can be secured with standard SSL.
  • User Management - ClusterControl provides you with advanced user management so you can know and report on who in your organisation uses your databases and how.
  • Automated Failover and Recovery - Advanced automated failover and verified backup technology in ClusterControl ensure your mission critical applications achieve high availability with zero downtime.
  • Operational Reporting - When you need to show you are meeting your SLAs, comprehensive operational reports help you keep track of your database's historical data.
  • Monitoring & Alerts - Advanced monitoring and reporting let you easily keep a close eye on the performance of your environment, providing in-depth metrics to analyze challenges and predict future needs.
  • Multi-Datacenter & Cross-Regional Support - ClusterControl supports environments across multiple datacenters and regions, keeping even the most complex environments highly available.
  • Load Balancing Technology - Achieve high availability using the most advanced load balancing technologies supported by ClusterControl.

ClusterControl also offers you everything you need to ensure you are meeting your industry’s compliance standards to secure your database operations. Segregation of duties, comprehensive reporting, and automated functionality let you rest assured that your deployments are secure, accessible to the right people, and easily managed from a single interface.

Learn more about ClusterControl for Financial Technology and read what our customers are saying about their experience using ClusterControl.

Video: ChatOps, Monitoring, & Orchestration Integrations for ClusterControl


We were very excited to release a new wave of ChatOps and Notification System integrations in the last version of ClusterControl.  But these systems are not the only ones that you can integrate with ClusterControl.

This video details the different types of integrations that are possible when you use ClusterControl to deploy, manage, monitor and scale your open source databases.


ClusterControl Integrations

ClusterControl offers direct connections to many popular incident communication services, allowing you to customize how you are alerted by ClusterControl when something goes wrong with your database environments. Until recently it was only possible to propagate (critical) alerts and warnings to other systems, using our Notification Services framework.

In ClusterControl 1.4.2 we have overhauled this functionality and replaced it with ClusterControl Integrations. ClusterControl Integrations do not require modifying any external files. In effect, integrating with applications and services has become so easy that, on average, you can add a new integration in less than one minute!

ClusterControl can integrate with many different types of services like...

  • Notification Services - PagerDuty, VictorOps and OpsGenie
  • ChatOps - Slack, Telegram, Flowdock, HipChat and Campfire
  • Orchestration - Chef, Puppet, Ansible
  • Monitoring - Nagios, Zabbix

Learn more about how ClusterControl integrates with the tools you use here.

An Executive's Guide to Database Management ROI - New Whitepaper


We’re happy to announce that our new whitepaper An Executive’s Guide to Database Management ROI is now available to download for free!

This guide discusses the options available to IT leaders when bringing open source databases into their environments, as well as general information on the open source database market. Also included in this whitepaper is an analysis of the costs of performing and omitting essential tasks typically associated with managing open source databases.

Topics included in this whitepaper are…

The whitepaper also discusses how ClusterControl fits into the open source equation and how, in many cases, it is more cost-effective than cobbling together multiple point solutions. Throughout the document are quotes from actual Severalnines clients who have implemented open source database technology with ClusterControl.

In the end, the key to maintaining high performance and achieving high availability must be both cost-effective and technically capable of delivering the results needed for the success of the application.

If your organization has adopted or is exploring open source database technology, this whitepaper will help you better understand the options.

Severalnines Enables Root Level Tech Customers to Achieve High Availability & Combat DDOS


Severalnines is excited to announce its newest customer Root Level Technology, a dedicated and co-location hosting provider in the United States.

Root Level Tech is a fast growing company with multiple data centers across the country.  The company builds custom solutions with geographic redundancy, high availability and protection from Distributed Denial of Service (DDoS) attacks.

In this case study, you’ll learn how, after experiencing issues with one of its customers, Root Level Tech contacted Severalnines to partner in developing an open source database management solution for MySQL: dependable deployments, coupled with monitoring and management features to prevent downtime, ensure high availability, and help protect their customers’ data.

Read the case study to learn more.


ClusterControl Helps Mediacloud Hit Cloud “9’s”


Severalnines is excited to announce its newest customer, Mediacloud, a Spain-based cloud digital infrastructure provider.

Mediacloud is a sub-division of Mediapro, one of the largest multimedia communications groups in Europe, and manages all of its data center operations. As part of these duties it manages cloud operations across six worldwide data centers.

In the case study, you can learn how Mediacloud overcame database consistency issues to achieve a multi-datacenter, highly available solution using Galera Cluster. This allowed Mediacloud to provide more than 150 of its clients with several nines of uptime :-) for their mission-critical applications.

Read the case study to learn more.


Manage and Automate Galera Cluster - Why ClusterControl


Galera Cluster by Codership is a synchronous multi-master replication technology which can be utilized to build highly available MySQL or MariaDB clusters.

It has been downloaded over one million times since last year, establishing itself as one of the most popular high availability and scalability technologies for MySQL, MariaDB and Percona Server with database users worldwide.

And while Galera Cluster is easy enough to deploy, it is complex to operate. Properly automating and managing it requires a sound understanding of how it works and how it behaves in production. For instance, once it’s deployed, how does it behave under a real-life workload, at scale, and during long-term operations?

This is where monitoring and optimizing performance, understanding anomalies, recovering from failures, managing schema and configuration changes and pushing them to production, upgrading versions, and performing backups all come into play.

There are a number of things you’d want to have thought through and be in control of before going into production with Galera Cluster for MySQL or MariaDB:

  • Hardware and network requirements
  • OS tuning
  • Sane configuration settings for the database
  • Production-grade deployment
  • Security
  • Monitoring and alerting
  • Query performance
  • Anomaly detection and troubleshooting
  • Recovering from failures
  • Schema changes
  • Backup strategies and disaster recovery
  • Reporting and analytics
  • Capacity planning

And the list goes on ...

We saw great potential in Galera Cluster early on, and started building a deployment and management product for it even before the first 1.0 version was released. We are happy to see that the technology has delivered on its promises - high availability of MySQL with good write scalability. Over the years, we have been able to build out comprehensive management procedures in ClusterControl and battle-test these across thousands of installations.

Not everyone has the knowledge, skills, time or resources to manage a high availability database. It is hard enough to find a production DBA, or a DevOps person with strong database knowledge. So imagine if most of the relevant steps in that process could be automated and managed from one central system.

This is where ClusterControl comes in.


ClusterControl is our all-inclusive database management system that lets you easily deploy, monitor, manage and scale highly available open source databases on-premise or in the cloud.

So Why Use ClusterControl for Galera Cluster?

Deploying a production ready Galera Cluster has become a matter of a few clicks for ClusterControl users worldwide. And with tens of thousands of deployments to date, it’s safe to say that ClusterControl is truly ‘Galera battle-tested’. We’ve included years of industry best practices into the product to help companies automate and manage their database operations as smoothly as possible.

Some of the key benefits of using ClusterControl with Galera Cluster include:

  • Maximum efficiency: automated failure detection, failover, and automatic recovery of individual nodes or even entire clusters
  • Pro-active intelligence: gain access to advanced monitoring features that give you insights into your database performance and alert you to any problems right away
  • Advanced security: ClusterControl provides an array of advanced security features that you can depend on to keep your data safe

One of our most trusted users put it this way:

“In Severalnines we found a partner that is much more than a perfect database management system provider with ClusterControl: we have a partner that helps us define the architectures of our LAMP projects and leverage the capabilities of Galera Cluster.”

- Olivier Lenormand, Technical Manager, CNRS/DSI.

Customers include Cisco, British Telecom, Orange, Ping Identity, Liberty Global, AVG and many others.

The following are some of the key features to be found in ClusterControl for Galera Cluster:

  • Deploy Database Clusters
  • Configuration Management
  • Full stack monitoring (DB/LB/Host)
  • Query Monitoring
  • Anomaly detection
  • Failure detection and automatic recovery/repair
  • Add Node, Load Balancer (HAProxy, ProxySQL, MaxScale) or asynchronous replication slave
  • Backup Management
  • Encryption of data in transit
  • Online rolling upgrades
  • Developer Studio with Advisors

For a general introduction to ClusterControl, view the following video:

And for a demonstration of the ClusterControl features for Galera Cluster, view the following demo video:

To summarise, ClusterControl works seamlessly with your Galera setup, providing an integrated monitoring and troubleshooting approach that speeds up problem resolution. A single interface saves you time by not having to cobble together configuration management tools, monitoring tools, scripts, etc. to operate your databases. And you can maximize efficiency and reduce database downtime with battle-tested automated recovery features.

Finally, ClusterControl fully supports all three Galera Cluster flavours, so you can easily deploy different clusters and compare them yourself with your own workload, on your own hardware. Do give it a try.

ClusterControl in the Cloud - All Our Resources


While many of our customers utilize ClusterControl on-premise to automate and manage their open source databases, several are deploying ClusterControl alongside their applications in the cloud. Utilizing the cloud allows your business and applications to benefit from the cost-savings and flexibility that come with cloud computing. In addition you don’t have to worry about purchasing, maintaining and upgrading equipment.

Along the same lines, ClusterControl offers a suite of database automation and management functions to give you full control of your database infrastructure. With it you can deploy, manage, monitor and scale your databases, securely and with ease through our point-and-click interface.

As the load on your application increases, your cloud environment can be expanded to provide more computing power to handle that load. In much the same way, ClusterControl utilizes state-of-the-art database, caching, and load balancing technologies that enable you to scale out the load on your databases and spread that load evenly across nodes.

These performance benefits are just some of the many reasons to leverage ClusterControl to manage your open source database instances in the cloud. From advanced monitoring to backups and automatic failover, ClusterControl is your true end-to-end database management solution.

Below you will find some of our top resources to help you get your databases up-and-running in the cloud…


AWS Marketplace

ClusterControl on the AWS Marketplace

Want to install ClusterControl directly onto your AWS EC2 instance? Check us out on the Amazon Marketplace.
(New version coming soon!)

Install Today

Top Blogs

Migrating MySQL database from Amazon RDS to DigitalOcean

This blog post describes the migration process from an EC2 instance to a DigitalOcean droplet.

Read More

MySQL in the Cloud - Online Migration from Amazon RDS to EC2 Instance (PART ONE)

RDS for MySQL is easy to get started with. It's a convenient way to deploy and use MySQL, without having to worry about operational overhead. The tradeoff, though, is reduced control.

Read More

MySQL in the Cloud - Online Migration from Amazon RDS to Your Own Server (PART TWO)

It's challenging to move data out of RDS for MySQL. We will show you how to do the actual migration of data to your own server, and redirect your applications to the new database without downtime.

Read More

MySQL in the Cloud - Pros and Cons of Amazon RDS

Moving your data into a public cloud service is a big decision. All the major cloud vendors offer cloud database services, with Amazon RDS for MySQL being probably the most popular. In this blog, we’ll have a close look at what it is, how it works, and compare its pros and cons.

Read More

About Cloud Lock-in and Open Source Databases

Severalnines CEO Vinay Joosery discusses key considerations when choosing cloud providers to host and manage mission-critical data, and thus avoid cloud lock-in.

Read More

Infrastructure Automation - Deploying ClusterControl and MySQL-based systems on AWS using Ansible

This blog post has the latest updates to our ClusterControl Ansible Role. It now supports automatic deployment of MySQL-based systems (MySQL Replication, Galera Cluster, NDB Cluster).

Read More

Leveraging AWS tools to speed up management of Galera Cluster on Amazon Cloud

We previously covered basic tuning and configuration best practices for MySQL Galera Cluster on AWS. In this blog post, we’ll go over some AWS features/tools that you may find useful when managing Galera on Amazon Cloud. This won’t be a detailed how-to guide, as each tool described below would warrant its own blog post. But this should be a good overview of how you can use the AWS tools at your disposal.

Read More

5 Performance tips for running Galera Cluster for MySQL or MariaDB on AWS Cloud

Amazon Web Services is one of the most popular cloud environments. Galera Cluster is one of the most popular MySQL clustering solutions. This is exactly why you’ll see many Galera clusters running on EC2 instances. In this blog post, we’ll go over five performance tips that you need to take into consideration when deploying and running Galera Cluster on EC2.

Read More

How to change AWS instance sizes for your Galera Cluster and optimize performance

Running your database cluster on AWS is a great way to adapt to changing workloads by adding/removing instances, or by scaling up/down each instance. At Severalnines, we talk much more about scale-out than scale up, but there are cases where you might want to scale up an instance instead of scaling out.

Read More


How to Deploy ClusterControl on AWS to Manage your Cloud Database


ClusterControl is infrastructure-agnostic - it can be used in your own datacenter on physical hosts, as well as in virtualized cloud environments. All you need is ssh access from the ClusterControl host to the database nodes, and you can then deploy standalone/replicated/clustered MySQL/MariaDB, MongoDB (replica sets or sharded clusters) or PostgreSQL (streaming replication). In this blog post, we will walk you through the steps to deploy ClusterControl on EC2.

Setting up instances in EC2

The hardware requirements for ClusterControl are described here. Those are meant to create a performant platform for the ClusterControl server. Having said that, we will use a small instance for our testing purposes (t2.micro) - it should be enough for us.

First, we need to pick an AMI. ClusterControl supports:

  • Redhat/CentOS/Oracle Linux 6 and later
  • Ubuntu 12.04/14.04/16.04 LTS
  • Debian 7.0 and later

We are going to use Ubuntu 16.04.

Next step - instance type. We will pick t2.micro for now, although you will want to use larger instances for production setups. For other cloud providers, pick instances with at least 1 GB of memory.
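Once an instance is up, a quick memory sanity check can save a failed install later (the installer itself warns below 2GB of RAM, as its output shows further down). This is a generic Linux sketch, not a ClusterControl command:

```shell
# Generic Linux sanity check (not a ClusterControl command): report total
# memory so you can confirm the instance meets the 2GB guideline.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_mb=$((mem_kb / 1024))
echo "Total memory: ${mem_mb} MB"
if [ "$mem_mb" -lt 2048 ]; then
    echo "Warning: below the 2GB recommended for ClusterControl"
fi
```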

We are going to deploy four instances at once: one for ClusterControl and three for Percona XtraDB Cluster. You need to decide where those instances should be deployed (in a VPC or not, in which subnet, etc.). For our testing purposes, we are going to use a VPC and a single subnet. Of course, deploying nodes across subnets (Availability Zones) makes your setup more likely to survive should one of the AZs become unavailable.

For storage we’ll use 100GB of general purpose SSD volume (GP2). This should be enough to perform some tests with a reasonable volume of data.

Next - security groups. SSH access is a requirement. Other than that, you need to open ports required by the database you plan to deploy. You can find more information on which ports are required in our support portal.
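As a rough sketch, the Galera-specific ports are well documented: 3306 for MySQL clients, 4567 for Galera group communication, 4568 for incremental state transfer (IST), and 4444 for full state snapshot transfer (SST). The loop below only prints the rules you would translate into security group entries; treat the list as a starting point and confirm it against the support portal:

```shell
# Standard inbound ports for a Galera node (starting point only; confirm
# the full list against the support portal before applying it to a real
# security group):
#   22   SSH (required by ClusterControl)
#   3306 MySQL client connections
#   4444 State Snapshot Transfer (SST)
#   4567 Galera group communication
#   4568 Incremental State Transfer (IST)
GALERA_PORTS="22 3306 4444 4567 4568"
for port in $GALERA_PORTS; do
    echo "allow inbound tcp/${port} from the cluster subnet"
done
```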

Finally, you need to either pick one of the existing key pairs or you can create a new one. After this step your instances will be launched.

Once the instances are up and running, it’s time to install ClusterControl. For that, log into one of the instances and download the ClusterControl installation script, install-cc:

ubuntu@ip-172-30-4-20:~$ wget http://www.severalnines.com/downloads/cmon/install-cc
--2017-09-06 11:13:10--  http://www.severalnines.com/downloads/cmon/install-cc
Resolving www.severalnines.com (www.severalnines.com)... 107.154.146.155
Connecting to www.severalnines.com (www.severalnines.com)|107.154.146.155|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://www.severalnines.com/downloads/cmon/install-cc [following]
--2017-09-06 11:13:10--  https://www.severalnines.com/downloads/cmon/install-cc
Connecting to www.severalnines.com (www.severalnines.com)|107.154.146.155|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://severalnines.com/downloads/cmon/install-cc [following]
--2017-09-06 11:13:11--  https://severalnines.com/downloads/cmon/install-cc
Resolving severalnines.com (severalnines.com)... 107.154.238.155, 107.154.148.155
Connecting to severalnines.com (severalnines.com)|107.154.238.155|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 56913 (56K) [text/plain]
Saving to: ‘install-cc’

install-cc                                           100%[=====================================================================================================================>]  55.58K   289KB/s    in 0.2s

2017-09-06 11:13:12 (289 KB/s) - ‘install-cc’ saved [56913/56913]

Then, make sure it can be executed before running it:

ubuntu@ip-172-30-4-20:~$ chmod +x install-cc
ubuntu@ip-172-30-4-20:~$ sudo ./install-cc

At the beginning, you’ll get some information about the requirements on supported Linux distributions:

!!
Only RHEL/Centos 6.x|7.x, Debian 7.x|8.x, Ubuntu 12.04.x|14.04.x|16.04.x LTS versions are supported
Minimum system requirements: 2GB+ RAM, 2+ CPU cores
Server Memory: 990M total, 622M free
MySQL innodb_buffer_pool_size set to 512M

Severalnines would like your help improving our installation process.
Information such as OS, memory and install success helps us improve how we onboard our users.
None of the collected information identifies you personally.
!!
=> Would you like to help us by sending diagnostics data for the installation? (Y/n):

This script will add Severalnines repository server for deb and rpm packages and
install the ClusterControl Web Applicaiton and Controller.
An Apache and MySQL server will also be installed. An existing MySQL Server on this host can be used.

At some point you will have to answer some questions about hostnames, ports and passwords.

=> The Controller hostname will be set to 172.30.4.20. Do you want to change it? (y/N):
=> Creating temporary staging dir s9s_tmp

=> Setting up the ClusterControl Web Application ...
=> Using web document root /var/www/html
=> No running MySQL server detected
=> Installing the default distro MySQL Server ...
=> Assuming default MySQL port is 3306. Do you want to change it? (y/N):

=> Enter the MySQL root password:
=> Enter the MySQL root password again:
=> Importing the Web Application DB schema and creating the cmon user.

=> Importing /var/www/html/clustercontrol/sql/dc-schema.sql
mysql: [Warning] Using a password on the command line interface can be insecure.
=> Set a password for ClusterControl's MySQL user (cmon) [cmon]
=> Enter a CMON user password:
=> Enter the CMON user password again: => Creating the MySQL cmon user ...
mysql: [Warning] Using a password on the command line interface can be insecure.
=> Creating UI configuration ...

Finally, you’ll get the confirmation that ClusterControl has been installed. The install script will also attempt to detect your public IP and print out a link that you can use in your browser to access ClusterControl.

=> ClusterControl installation completed!
Open your web browser to http://172.30.4.20/clustercontrol and
enter an email address and new password for the default Admin User.

Determining network interfaces. This may take a couple of minutes. Do NOT press any key.
Public/external IP => http://34.230.71.40/clustercontrol
Installation successful. If you want to uninstall ClusterControl then run install-cc --uninstall.

Once the installation is done, there’s still one thing to take care of: SSH access from ClusterControl to the remaining hosts. Unless you already have SSH access between the nodes (in which case you can use ssh-copy-id), this will be a manual process. First and foremost, we need to generate a new SSH key:

root@ip-172-30-4-20:~# ssh-keygen -C 'galera_cluster' -f id_rsa_galera -t rsa -b 4096
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in id_rsa_galera.
Your public key has been saved in id_rsa_galera.pub.
The key fingerprint is:
SHA256:2tWOGXbrtc0Qh45NhNPzUSVdDzE9ANV1TJ0QBE5QrQY galera_cluster
The key's randomart image is:
+---[RSA 4096]----+
|         .o=**X*&|
|         Eo  + BO|
|          ..+ +.o|
|           + o +.|
|        S = o + o|
|       o o * * o |
|      . . o + =  |
|           . . = |
|            . . o|
+----[SHA256]-----+
root@ip-172-30-4-20:~#

We can verify it has been created correctly. You will also want to copy the contents of the public key - we will use it to create its copies on remaining nodes.

root@ip-172-30-4-20:~# cat id_rsa_galera.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDYKil17MzTrNc70GIQlVoK6xLop9acfT3W6kBUGO4ybsvIA5Fss+WvT/DLsYMtukq2Eih93eO4amLRYQIeyWSjJ/bBwIF/LXL4v04GF8+wbDgCyiV/t9dSuXna9qFeawkUVcPjnmWnZqUoaP5QeovXTluxl54xEwbFm1oLT+fgWbaim5w9vVUK4+hAHaZ7wVvTPVsIt1B3nJgWJF0Sz+TJN87vSUg7xdshgzhapUotXlguFGVzmNKWLnEFDCK7RT41oh4y4rkKP7YLc+wFfRHYTnKyMIcf0/0VMyL+2AdwQp8RThbBommf2HGimF1hSyA9/fc+tLi7FVTg1bKKeXj4hwexeFAJZwoy3HyD3wQ/NwadpDVk5Pg7YYzdN2aCZfvo27qp3gdQQ2H+LF6LvDyQEkgRpFN+pHoWQvPjJJasjfIcfdaC9WmDiL4s5fXyCTQz/x0NaTXVkLBS9ibfOUw8AGdd36FvdqnNOFOlMLKLa359JhdpqXnH7ksiThcotQuFmV5Dc8M66vTDz9rvVZhNC0nME478RNBP0Bgj1BM26XdQlzozeaRmHGoZXcSQVJTXBC93+QN4+bRmWmxhhj5G5M7bFiQyal1VtugoUt8ZV4NiiG+KDd6yj5um8+CffD/BASGrv3vffH+AK7xtjchIv5su40+unecfSOtO98TiQw== galera_cluster

Now, on every remaining node, you need to add this public key to the authorized_keys file. On Ubuntu, you may want to clean out its contents first if you want to use a root login; by default, only the ubuntu user can connect through SSH. Such a setup (regular user with sudo) can also be used with ClusterControl, but here we’ll go with the root user.

root@ip-172-30-4-198:~# vim ~/.ssh/authorized_keys
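If you prefer not to edit the file by hand, the same change can be scripted. This is a sketch only; the PUBKEY value below is a hypothetical placeholder for the real contents of id_rsa_galera.pub printed earlier:

```shell
# Append the ClusterControl public key on a database node (run there as root).
# PUBKEY is a hypothetical placeholder - paste your real id_rsa_galera.pub line.
PUBKEY='ssh-rsa AAAA...example... galera_cluster'
SSH_DIR=~/.ssh
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
# Only append if the key is not present yet, so re-runs are harmless
grep -qF "$PUBKEY" "$SSH_DIR/authorized_keys" 2>/dev/null || \
    echo "$PUBKEY" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```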

Once the authorized_keys files on all nodes contain our public key, we will copy the private key to the .ssh directory on the ClusterControl host and set the necessary access rights:

root@ip-172-30-4-20:~# cp id_rsa_galera /root/.ssh/
root@ip-172-30-4-20:~# chmod 600 /root/.ssh/id_rsa_galera

Now we can test if SSH access works as expected:

root@ip-172-30-4-20:~# ssh -i /root/.ssh/id_rsa_galera 172.30.4.46
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-1022-aws x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.


root@ip-172-30-4-46:~# logout
Connection to 172.30.4.46 closed.

All’s good. It’s time to configure ClusterControl.

Fill in some registration details.

Once you have logged in, a wizard will appear with an option to either deploy a new cluster or import an existing one.

We want to deploy Percona XtraDB Cluster so we’ll go for “Deploy Database Cluster” and pick the “MySQL Galera” tab. Here we have to fill in access details required for SSH connectivity. We’ll set SSH User to root and we will fill in the path to our SSH key.

Next, we’ll define a vendor, version, password and IP addresses for our database hosts. Please keep in mind that ClusterControl will check SSH connectivity to the target database hosts. If everything works ok, you’ll see green ticks. If you see that SSH authentication failed, then you will need to investigate as the ClusterControl server is not able to access your database hosts.

Then, click on Deploy to start the deployment process.

You can track the deployment progress in the activity monitor.

Remember that deployment is only the first step. Operating a database requires you to monitor the performance of your hosts, database instances and queries, manage backups, fix failures and other anomalies, manage proxies, perform upgrades, and more. ClusterControl can manage all of these aspects for you, so do give it a try and let us know how you get on.

Australia’s Top Hosting Provider Leverages ClusterControl to Deliver World-Class Experience for their Users


Severalnines is excited to announce its newest customer VentraIP, an Australian-based web hosting, domain names, and SSL certificate provider.

VentraIP Australia is the largest privately owned web host and domain name registrar in Australia, backed by a team of industry veterans and local technical professionals who ensure their 150,000 customers always get the best customer service and technical support 24 hours a day, 7 days a week.

In the case study, you can learn how VentraIP replaced the ill-performing standalone MySQL instance that powered their front end with Galera Cluster, using ClusterControl to deliver a high-performance, redundant system that meets their customers’ needs now and in the future.

Read the case study to learn more.

ClusterControl
Single Console for Your Entire Database Infrastructure
Find out what else is new in ClusterControl

Considerations When Running ClusterControl in the Cloud


Cloud computing is booming and the cost to rent a server, or even an entire datacenter, has been going down over the last couple of years. There are many cloud providers to choose from, and most of them provide pay-per-use business models. These benefits are tempting enterprises to move away from traditional on-premise bare-metal infrastructure to a more flexible cloud infrastructure, as the latter supports their business growth more effectively.

ClusterControl can be used to automate the management of databases in the cloud. In this blog post, we are going to look at a number of considerations that users tend to overlook when running ClusterControl in the cloud. Some of them can be seen as limitations of ClusterControl, due to the nature of the cloud environment itself.

IP Address and Networking

There are a couple of things to look out for:

  • ClusterControl has limited capabilities for provisioning IPv6 hosts. When deploying a database server or cluster, please use IPv4 whenever possible.
  • ClusterControl must be able to reach the database host directly, either by using the FQDN, public IP address or internal IP address.
  • ClusterControl does not support the bastion host approach for accessing the monitored hosts. A bastion host is a common setup in the cloud whose purpose is to provide access to a private network from an external network; this reduces the risk of penetration from the external network. You can think of it as a gateway between an inside network and an outside network.
  • ClusterControl relies on a sticky hostname or IP address. Changing the IP address of a monitored host is not straightforward in ClusterControl. Try to use a dedicated IP address whenever possible; otherwise, use a dedicated FQDN.

Since version 1.4.1, ClusterControl is able to manage database instances that run with multiple network interfaces per host. This is a pretty common setup in the cloud, where a cloud instance is usually assigned an internal IP address that maps to the outside world via Network Address Translation (NAT) with a public IP address. In pre-1.4.1 versions, ClusterControl could only use the public IP address to monitor the database instances from outside, and it would deploy the database cluster based on the public IP address. ClusterControl now tackles this limitation by separating IP address connectivity per host into two types: the management address and the data address.

When deploying a new cluster or importing an existing one, ClusterControl will perform sanity checks once you have entered the IP address. During this check, ClusterControl will detect the operating system and collect host stats and network interfaces of the host. You can then choose which IP address you would like to use as the management and data IP addresses. The management interface is used by ClusterControl operations, while the other interface is solely for database cluster communications.

The following is an example on how you can choose the data IP address during deployment:

Once the database cluster is deployed, you should see both IP addresses represented under the host, in the Nodes section. Keep in mind that having the ClusterControl host close to the monitored database server, e.g. in the same network, is always a better option.

Identity Management

ClusterControl only runs on Linux (see here for supported OS platforms). It uses SSH as the main communication channel to provision hosts. SSH is simple to implement, offers strong authentication with secure, encrypted data communication, and is the number one choice of administrators for controlling systems remotely. ClusterControl uses key-based SSH to connect to each of the monitored hosts, to perform management and monitoring tasks as a sudo user or root. In fact, the first and foremost step you need to take after installing ClusterControl is to set up passwordless SSH authentication to all target database hosts (see the Getting Started page).

When launching a new Linux-based cloud instance, the instance usually comes with an SSH server installed. The cloud provider typically generates a new private key, or associates an existing key with the created instance, for the sudo user (or root user), so whoever holds the private key can authenticate and access the server using an SSH client. Some cloud providers offer password-based SSH and will generate a random password for the same purpose.

If you already have a private key associated with the instance, you can use the same key as the SSH identity in ClusterControl to provision other cloud instances directly. There is no need to set up a passwordless SSH key for the new nodes, since all of them are associated with the very same private key. If you’re on AWS EC2, for example, upload the SSH key to the ClusterControl host:

$ scp mykey.pem ubuntu@54.128.10.104:~

Then assign the correct ownership and permissions on the ClusterControl host:

$ chown ubuntu:ubuntu /home/ubuntu/mykey.pem
$ chmod 400 /home/ubuntu/mykey.pem

Then, when asked by ClusterControl to specify the SSH user and its associated SSH key file, specify the following:

Otherwise, if you don’t have an SSH key, generate a new one and copy it to all database nodes that you would like to manage with ClusterControl:

$ whoami
ubuntu
$ ssh-keygen -t rsa # press Enter for all prompts
$ ssh-copy-id ubuntu@{target host} # repeat this for each target host

ClusterControl’s known limitations in this area:

  • SSH key with passphrase is not supported.
  • When using a user other than root, ClusterControl will default to connecting through SSH with a pseudo-terminal (tty). This usually causes the wtmp log file to grow quickly. See Setup wtmp Log Rotation for details.

Firewall and Security Groups

When running in the public cloud, you will be exposed to threats if your instances are open to the internet. It’s recommended to use a firewall (or security groups, in cloud terminology) and only open the required ports. If ClusterControl is monitoring the cloud instances externally (e.g., from outside the security group that the databases are in), do open the relevant ports as listed in this documentation page. Otherwise, consider running ClusterControl in the same network as the databases. We recommend isolating your database infrastructure from the public internet and whitelisting only known hosts or networks to connect to the database.
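As an illustration of the whitelist approach, here is a hedged sketch that only generates (not applies) firewall rules restricting two example ports to the ClusterControl host; CC_HOST and the port list are assumptions, so check the documentation for the full list of ports your setup needs:

```shell
# Generate iptables commands whitelisting the ClusterControl host.
# CC_HOST is a hypothetical address; 22 (SSH) and 3306 (MySQL) are examples only.
CC_HOST=10.0.0.10
rules=$(for port in 22 3306; do
    echo "iptables -A INPUT -p tcp -s ${CC_HOST} --dport ${port} -j ACCEPT"
done)
echo "$rules"
```

Review the generated lines, then apply them (or the security-group equivalent) once they match your environment.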

If you use the ClusterControl command-line client, s9s, you have to expose port 9501 as well. This is the ClusterControl backend (CMON) RPC interface, running over TLS, that the client communicates with. By default, this service is configured to listen on the localhost interface only. To configure it, take a look at this documentation. The s9s CLI tool allows you to control your database cluster from the command line, remotely from anywhere, including your local workstation.

Instance Types and Specifications

As mentioned on the Hardware Requirements page, the minimum server requirement for ClusterControl is 2GB of RAM and 2 CPU cores. However, you can start with the smallest, lowest-cost instance and resize it when the host requires more resources.

ClusterControl relies on a MySQL database to store and retrieve its monitoring data. During the ClusterControl installation, the installer script (install-cc) configures the MySQL options according to the resources available at that point, most importantly innodb_buffer_pool_size. If you upgrade the instance to a higher spec, do not forget to adjust this option accordingly in my.cnf and restart the MySQL service. A simple rule of thumb is 50% of the total available memory on the host.
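As a rough sketch of that rule of thumb, assuming a Linux host where total memory can be read from /proc/meminfo, you could derive a starting value like this:

```shell
# Compute roughly 50% of total RAM, in MB, as a starting point for
# innodb_buffer_pool_size on the ClusterControl host.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
pool_mb=$(( total_kb / 2 / 1024 ))
echo "innodb_buffer_pool_size = ${pool_mb}M"
```

The printed line can then be placed in the [mysqld] section of my.cnf before restarting MySQL.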

Take note that the following will affect the performance of the ClusterControl instance:

  • The number of hosts it monitors
  • MySQL server where ‘cmon’ and ‘dcps’ databases reside
  • Granularity of the sampling

Migrating a cloud instance from one site to another, whether across availability zones or geographical regions, is pretty straightforward. You can simply take a snapshot of the instance and fire up a new instance based on that snapshot in another region. However, since ClusterControl relies on proper IP address configuration, it will not work correctly if the database is started as a new instance with a different IP address. To manage the new instance, it is advisable to re-import the database node into ClusterControl.

Automating and Managing Open Source Databases for Streaming Video Applications


With the global video streaming market expected to grow to US$70+ Billion and the video streaming software market to US$7+ Billion by 2022, it’s no surprise that video streaming is getting a lot of attention these days.

Who hasn’t sat in front of their laptop or held their tablet in their hands, cursing the screen as the video or live show they’d been looking forward to watching painfully stops and starts due to lag in the video streaming application?

It’s also no surprise then that behind the scenes of this massive new market, the biggest database challenges with streaming video are maintaining uptime and scaling to ensure consistent delivery, no matter how many users are accessing the content. And no matter how much data that usage generates.

At Severalnines, we’re lucky to count a number of video streaming providers amongst our trusted customers, who rely on ClusterControl to automate and manage the open source databases that power their video streaming platforms. And they identified the following as their main database challenges:

  • Automation
  • Redundancy
  • Scaling
  • Performance

Companies or organisations who stream online media need advanced options for handling and distributing the database load required by the application. And our all-inclusive database automation and management system, ClusterControl, does exactly that and more for them.

We recently sat down with one of these customers, StreamAMG, to discuss their database challenges. StreamAMG is Europe’s largest player in online video solutions, helping football teams such as Liverpool FC, Aston Villa, Sunderland AFC and the BBC keep fans watching from across the world.

Andrew de Bono, Platform Manager, explains how they manage these challenges with the help of ClusterControl. Watch the video interview below.

ClusterControl provides advanced scaling, management and fault tolerance features to allow you to maintain uptime and operate at peak performance.

To find out more about how ClusterControl helps overcome database challenges and get started with it yourself, visit our website on: https://severalnines.com/product/clustercontrol

Reference points: Video Streaming Market 70.05 Billion USD by 2021

New Tutorial: MySQL & MariaDB Load Balancing with ProxySQL


Severalnines is pleased to announce the launch of our new tutorial Database Load Balancing for MySQL and MariaDB with ProxySQL.

ProxySQL is a lightweight yet complex protocol-aware proxy that sits between the MySQL clients and servers. It acts as a gate that separates clients from the databases, and is therefore a single entry point used to access all the database servers.

Included in this new tutorial:

  • Introduction to ProxySQL
  • Deep dive into ProxySQL concepts
  • How to install ProxySQL using ClusterControl
  • How to manage ProxySQL using ClusterControl
  • Managing multiple ProxySQL instances
  • ProxySQL failover handling
  • Use Cases including caching, rewriting, redirection and sharding

Load balancing and high availability go hand in hand; without load balancing, you are left with a single point of entry to your database, and any spike in traffic could crash your setup. ClusterControl makes it easy to deploy and configure several different load balancing technologies for MySQL and MariaDB, including ProxySQL, with a point-and-click graphical interface.

Check out our new tutorial to learn how to take advantage of this exciting new technology.


ClusterControl for ProxySQL

ProxySQL enables MySQL, MariaDB and Percona XtraDB database systems to easily manage intense, high-traffic database applications without losing availability. ClusterControl offers advanced, point-and-click configuration management features for the load balancing technologies we support. We know the issues regularly faced and make it easy to customize and configure the load balancer for your unique application needs.

We are big fans of load balancing, and consider it to be an integral part of the database stack. ClusterControl has many things preconfigured to get you started with a couple of clicks. If you run into challenges, we also provide resources and on-the-spot support to help ensure your configurations are running at peak performance.

ClusterControl delivers an array of features to help you deploy and manage ProxySQL:

  • Advanced Graphical Interface - ClusterControl provides the only GUI on the market for the easy deployment, configuration and management of ProxySQL.
  • Point and Click Deployment - With ClusterControl you’re able to apply point-and-click deployments to MySQL, MySQL replication, MySQL Cluster, Galera Cluster, MariaDB, MariaDB Galera Cluster, and Percona XtraDB technologies, as well as the top related load balancers: HAProxy, MaxScale and ProxySQL.
  • Suite of monitoring graphs - With comprehensive reports you have a clear view of data points like connections, queries, data transfer and utilization, and more.

  • Configuration Management - Easily configure and manage your ProxySQL deployments with a simple UI. With ClusterControl you can create servers, reorientate your setup, create users, set rules, manage query routing, and enable variable configurations.

Watch the Replay: MySQL on Docker - Understanding the Basics


Thanks to everyone who joined us this week as we broadcast our MySQL on Docker webinar live from the Percona Live Conference in Dublin!

Our colleague Ashraf Sharif discussed how Docker containers work, through to running a simple MySQL container as well as the ClusterControl Docker image (amongst other things).

If you missed the session or would like to watch it again, it’s now online for viewing.

Watch replay

Here’s the full agenda of the topics that were covered during the webinar. The content is aimed at MySQL users who are Docker beginners and who would like to understand the basics of running a MySQL container on Docker.

  • Docker and its components
  • Concept and terminology
  • How a Docker container works
  • Advantages and disadvantages
  • Stateless vs stateful
  • Docker images for MySQL
  • Running a simple MySQL container
  • The ClusterControl Docker image
  • Severalnines on Docker Hub

Watch replay

And if you’re not following our Docker blog series yet, we encourage you to do so: MySQL on Docker.

New webinar: how to automate and manage your MongoDB databases


Join us on October 24th for a different perspective on how to automate and manage your MongoDB or Percona Server for MongoDB databases.

In today’s busy operations environments, automation is key to performing fast, efficient and consistently repeatable software deployments and recoveries. And there are many generic tools available, both commercial and open source, to aid with the automation of operational tasks.

However, there are a small number of specialist domain-specific automation tools available, and we are going to compare the MongoDB-relevant functionality of two of these products: MongoDB’s Ops Manager, and ClusterControl from Severalnines.

Sign up below to hear all about the differences between these tools, and how they help automate and manage MongoDB operations. We’ll cover key aspects, from installation and maintenance through backing up your deployments to performing upgrades.

Date, Time & Registration

Europe/MEA/APAC

Tuesday, October 24th at 09:00 BST / 10:00 CEST (Germany, France, Sweden)

Register Now

North America/LatAm

Tuesday, October 24th at 09:00 PDT (US) / 12:00 EDT (US)

Register Now

Agenda

  • Installation and maintenance
  • Complexity of architecture
  • Options for redundancy
  • Comparative functionality
  • Monitoring, Dashboard, Alerting
  • Backing up your deployments
  • Automated deployment of advanced configurations
  • Upgrading existing deployments

Speaker

Ruairí Newman is passionate about all things cloud and automation and has worked for MongoDB, VMware and Amazon Web Services among others. He has a background in Operational Support Systems and Professional Services.

Prior to joining Severalnines, Ruairí worked for Huawei Ireland as Senior Cloud Solutions Architect on their Web Services project, where he advised on commodity cloud architecture and Monitoring technologies, and deployed and administered a Research & Development Openstack lab.


How to Automate Galera Cluster Using the ClusterControl CLI


As sysadmins and developers, we spend a lot of our time in a terminal. So we brought ClusterControl to the terminal with our command-line interface tool, s9s. s9s provides an easy interface to the ClusterControl RPC v2 API. You will find it very useful when working with large-scale deployments, as the CLI allows you to design more complex features and workflows.

This blog post showcases how to use s9s to automate the management of Galera Cluster for MySQL or MariaDB, as well as a simple master-slave replication setup.

Setup

You can find installation instructions for your particular OS in the documentation. What’s important to note is that if you happen to use the latest s9s-tools from GitHub, there’s a slight change in the way you create a user. The following command will work fine:

s9s user --create --generate-key --controller="https://localhost:9501" dba

In general, there are two steps required if you want to configure CLI locally on the ClusterControl host. First, you need to create a user and then make some changes in the configuration file - all the steps are included in the documentation.

Deployment

Once the CLI has been configured correctly and has SSH access to your target database hosts, you can start the deployment process. At the time of writing, you can use the CLI to deploy MySQL, MariaDB and PostgreSQL clusters. Let’s start with an example of how to deploy Percona XtraDB Cluster 5.7. A single command is required to do that.

s9s cluster --create --cluster-type=galera --nodes="10.0.0.226;10.0.0.227;10.0.0.228"  --vendor=percona --provider-version=5.7 --db-admin-passwd="pass" --os-user=root --cluster-name="PXC_Cluster_57" --wait

The last option, “--wait”, means that the command will wait until the job completes, showing its progress. You can skip it if you want; in that case, the s9s command will return to the shell immediately after it registers a new job in cmon. This is perfectly fine, as cmon is the process which handles the job itself. You can always check the progress of a job separately, using:

root@vagrant:~# s9s job --list -l
--------------------------------------------------------------------------------------
Create Galera Cluster
Installing MySQL on 10.0.0.226                                           [██▊       ]
                                                                                                                                                                                                         26.09%
Created   : 2017-10-05 11:23:00    ID   : 1          Status : RUNNING
Started   : 2017-10-05 11:23:02    User : dba        Host   :
Ended     :                        Group: users
--------------------------------------------------------------------------------------
Total: 1

Let’s take a look at another example. This time we’ll create a new cluster using MySQL replication: a simple master-slave pair. Again, a single command is enough:

root@vagrant:~# s9s cluster --create --nodes="10.0.0.229?master;10.0.0.230?slave" --vendor=percona --cluster-type=mysqlreplication --provider-version=5.7 --os-user=root --wait
Create MySQL Replication Cluster
/ Job  6 FINISHED   [██████████] 100% Cluster created

We can now verify that both clusters are up and running:

root@vagrant:~# s9s cluster --list --long
ID STATE   TYPE        OWNER GROUP NAME           COMMENT
 1 STARTED galera      dba   users PXC_Cluster_57 All nodes are operational.
 2 STARTED replication dba   users cluster_2      All nodes are operational.
Total: 2

Of course, all of this is also visible via the GUI:

Now, let’s add a ProxySQL loadbalancer:

root@vagrant:~# s9s cluster --add-node --nodes="proxysql://10.0.0.226" --cluster-id=1
WARNING: admin/admin
WARNING: proxy-monitor/proxy-monitor
Job with ID 7 registered.

This time we didn’t use the ‘--wait’ option, so if we want to check the progress, we have to do it on our own. Please note that we are using the job ID which was returned by the previous command, so we’ll obtain information on this particular job only:

root@vagrant:~# s9s job --list --long --job-id=7
--------------------------------------------------------------------------------------
Add ProxySQL to Cluster
Waiting for ProxySQL                                                     [██████▋   ]
                                                                            65.00%
Created   : 2017-10-06 14:09:11    ID   : 7          Status : RUNNING
Started   : 2017-10-06 14:09:12    User : dba        Host   :
Ended     :                        Group: users
--------------------------------------------------------------------------------------
Total: 7

Scaling out

Nodes can be added to our Galera cluster via a single command:

s9s cluster --add-node --nodes 10.0.0.229 --cluster-id 1
Job with ID 8 registered.
root@vagrant:~# s9s job --list --job-id=8
ID CID STATE  OWNER GROUP CREATED  RDY  TITLE
 8   1 FAILED dba   users 14:15:52   0% Add Node to Cluster
Total: 8

Something went wrong. We can check what exactly happened:

root@vagrant:~# s9s job --log --job-id=8
addNode: Verifying job parameters.
10.0.0.229:3306: Adding host to cluster.
10.0.0.229:3306: Testing SSH to host.
10.0.0.229:3306: Installing node.
10.0.0.229:3306: Setup new node (installSoftware = true).
10.0.0.229:3306: Detected a running mysqld server. It must be uninstalled first, or you can also add it to ClusterControl.

Right, that IP is already used for our replication server. We should have used another, free IP. Let’s try that:

root@vagrant:~# s9s cluster --add-node --nodes 10.0.0.231 --cluster-id 1
Job with ID 9 registered.
root@vagrant:~# s9s job --list --job-id=9
ID CID STATE    OWNER GROUP CREATED  RDY  TITLE
 9   1 FINISHED dba   users 14:20:08 100% Add Node to Cluster
Total: 9

Managing

Let’s say we want to take a backup of our replication master. We can do that from the GUI, but sometimes we may need to integrate it with external scripts. The ClusterControl CLI is a perfect fit for such a case. Let’s check what clusters we have:

root@vagrant:~# s9s cluster --list --long
ID STATE   TYPE        OWNER GROUP NAME           COMMENT
 1 STARTED galera      dba   users PXC_Cluster_57 All nodes are operational.
 2 STARTED replication dba   users cluster_2      All nodes are operational.
Total: 2

Then, let’s check the hosts in our replication cluster, with cluster ID 2:

root@vagrant:~# s9s nodes --list --long --cluster-id=2
STAT VERSION       CID CLUSTER   HOST       PORT COMMENT
soM- 5.7.19-17-log   2 cluster_2 10.0.0.229 3306 Up and running
soS- 5.7.19-17-log   2 cluster_2 10.0.0.230 3306 Up and running
coC- 1.4.3.2145      2 cluster_2 10.0.2.15  9500 Up and running

As we can see, there are three hosts that ClusterControl knows about - two of them are MySQL hosts (10.0.0.229 and 10.0.0.230), the third one is the ClusterControl instance itself. Let’s print only the relevant MySQL hosts:

root@vagrant:~# s9s nodes --list --long --cluster-id=2 10.0.0.2*
STAT VERSION       CID CLUSTER   HOST       PORT COMMENT
soM- 5.7.19-17-log   2 cluster_2 10.0.0.229 3306 Up and running
soS- 5.7.19-17-log   2 cluster_2 10.0.0.230 3306 Up and running
Total: 3

The “STAT” column contains several status characters. For more information, we’d suggest looking into the manual page for s9s-nodes (man s9s-nodes). Here we’ll just summarize the most important bits:

  • The first character tells us the type of the node: “s” means it’s a regular MySQL node, “c” - the ClusterControl controller.
  • The second character describes the state of the node: “o” tells us it’s online.
  • The third character is the role of the node: “M” describes a master, “S” - a slave, while “C” stands for controller.
  • The fourth character tells us if the node is in maintenance mode: “-” means there’s no maintenance scheduled; otherwise we’d see “M” here.

From this data we can see that our master is the host with IP 10.0.0.229. Let’s take a backup of it and store it on the controller.
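Those status flags lend themselves to scripting. As a small, self-contained illustration, the sample listing above is pasted inline below (in a real script you would pipe the live s9s command instead) and the master’s IP is extracted from the third STAT character:

```shell
# Find the MySQL master: first STAT char "s" (MySQL node), third char "M" (master).
nodes_output='soM- 5.7.19-17-log   2 cluster_2 10.0.0.229 3306 Up and running
soS- 5.7.19-17-log   2 cluster_2 10.0.0.230 3306 Up and running'
master=$(printf '%s\n' "$nodes_output" | \
    awk 'substr($1,1,1) == "s" && substr($1,3,1) == "M" {print $5}')
echo "Master: $master"
```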

root@vagrant:~# s9s backup --create --nodes=10.0.0.229 --cluster-id=2 --backup-method=xtrabackupfull --wait
Create Backup
| Job 12 FINISHED   [██████████] 100% Command ok

We can then verify that it indeed completed OK. Please note the “--backup-format” option, which allows you to define which information should be printed:

root@vagrant:~# s9s backup --list --full --backup-format="Started: %B Completed: %E Method: %M Stored on: %S Size: %s %F\n" --cluster-id=2
Started: 15:29:11 Completed: 15:29:19 Method: xtrabackupfull Stored on: 10.0.0.229 Size: 543382 backup-full-2017-10-06_152911.xbstream.gz
Total 1

Monitoring

All databases have to be monitored. ClusterControl uses advisors to watch some of the metrics on both MySQL and the operating system. When a condition is met, a notification is sent. ClusterControl also provides an extensive set of graphs, both real-time and historical, for post-mortem analysis or capacity planning. Sometimes it would be great to have access to some of those metrics without having to go through the GUI. The ClusterControl CLI makes this possible through the s9s-node command. Information on how to do that can be found in the manual page of s9s-node. We’ll show some examples of what you can do with the CLI.

First of all, let’s take a look at the “--node-format” option of the “s9s node” command. As you can see, there are plenty of options for printing interesting content.

root@vagrant:~# s9s node --list --node-format "%N %T %R %c cores %u%% CPU utilization %fmG of free memory, %tMB/s of net TX+RX, %M\n" "10.0.0.2*"
10.0.0.226 galera none 1 cores 13.823200% CPU utilization 0.503227G of free memory, 0.061036MB/s of net TX+RX, Up and running
10.0.0.227 galera none 1 cores 13.033900% CPU utilization 0.543209G of free memory, 0.053596MB/s of net TX+RX, Up and running
10.0.0.228 galera none 1 cores 12.929100% CPU utilization 0.541988G of free memory, 0.052066MB/s of net TX+RX, Up and running
10.0.0.226 proxysql  1 cores 13.823200% CPU utilization 0.503227G of free memory, 0.061036MB/s of net TX+RX, Process 'proxysql' is running.
10.0.0.231 galera none 1 cores 13.104700% CPU utilization 0.544048G of free memory, 0.045713MB/s of net TX+RX, Up and running
10.0.0.229 mysql master 1 cores 11.107300% CPU utilization 0.575871G of free memory, 0.035830MB/s of net TX+RX, Up and running
10.0.0.230 mysql slave 1 cores 9.861590% CPU utilization 0.580315G of free memory, 0.035451MB/s of net TX+RX, Up and running

With what we have shown here, you can probably imagine some cases for automation. For example, you can watch the CPU utilization of the nodes and, if it reaches some threshold, execute another s9s job to spin up a new node in the Galera cluster. You can also monitor memory utilization and send alerts if it passes some threshold.
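As a minimal sketch of such automation (the threshold value, cluster ID and add-node target below are illustrative assumptions, not part of any shipped tooling), a cron-driven shell script could compare the CPU field against a limit and trigger a scale-out job:

```shell
#!/bin/sh
# Illustrative threshold; tune to your environment.
THRESHOLD=80

# In a live setup this line would come from something like:
#   s9s node --list --node-format "%N %u\n" "10.0.0.2*"
line="10.0.0.226 13.823200"

host=$(echo "$line" | awk '{print $1}')
cpu=$(echo "$line" | awk '{printf "%d", $2}')   # truncate to an integer for test(1)

if [ "$cpu" -ge "$THRESHOLD" ]; then
    echo "$host is busy ($cpu%), scaling out"
    # s9s cluster --add-node --cluster-id=1 --nodes=10.0.0.232 --wait
else
    echo "$host is within limits ($cpu%)"
fi
```

The same pattern applies to the memory fields: swap %u for %fm in the format string and compare against a free-memory floor instead.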

The CLI can do more than that. For one, it is possible to check graphs from within the command line. Of course, those are not as feature-rich as the graphs in the GUI, but sometimes just seeing a graph is enough to spot an unexpected pattern and decide whether it is worth further investigation.

root@vagrant:~# s9s node --stat --cluster-id=1 --begin="00:00" --end="14:00" --graph=load 10.0.0.231
root@vagrant:~# s9s node --stat --cluster-id=1 --begin="00:00" --end="14:00" --graph=sqlqueries 10.0.0.231

During emergency situations, you may want to check resource utilization across the cluster. You can create a top-like output that combines data from all of the cluster nodes:

root@vagrant:~# s9s process --top --cluster-id=1
PXC_Cluster_57 - 14:38:01                                                                                                                                                               All nodes are operational.
4 hosts, 7 cores,  2.2 us,  3.1 sy, 94.7 id,  0.0 wa,  0.0 st,
GiB Mem : 2.9 total, 0.2 free, 0.9 used, 0.2 buffers, 1.6 cached
GiB Swap: 3 total, 0 used, 3 free,

PID   USER       HOST       PR  VIRT      RES    S   %CPU   %MEM COMMAND
 8331 root       10.0.2.15  20   743748    40948 S  10.28   5.40 cmon
26479 root       10.0.0.226 20   278532     6448 S   2.49   0.85 accounts-daemon
 5466 root       10.0.0.226 20    95372     7132 R   1.72   0.94 sshd
  651 root       10.0.0.227 20   278416     6184 S   1.37   0.82 accounts-daemon
  716 root       10.0.0.228 20   278304     6052 S   1.35   0.80 accounts-daemon
22447 n/a        10.0.0.226 20  2744444   148820 S   1.20  19.63 mysqld
  975 mysql      10.0.0.228 20  2733624   115212 S   1.18  15.20 mysqld
13691 n/a        10.0.0.227 20  2734104   130568 S   1.11  17.22 mysqld
22994 root       10.0.2.15  20    30400     9312 S   0.93   1.23 s9s
 9115 root       10.0.0.227 20    95368     7192 S   0.68   0.95 sshd
23768 root       10.0.0.228 20    95372     7160 S   0.67   0.94 sshd
15690 mysql      10.0.2.15  20  1102012   209056 S   0.67  27.58 mysqld
11471 root       10.0.0.226 20    95372     7392 S   0.17   0.98 sshd
22086 vagrant    10.0.2.15  20    95372     4960 S   0.17   0.65 sshd
 7282 root       10.0.0.226 20        0        0 S   0.09   0.00 kworker/u4:2
 9003 root       10.0.0.226 20        0        0 S   0.09   0.00 kworker/u4:1
 1195 root       10.0.0.227 20        0        0 S   0.09   0.00 kworker/u4:0
27240 root       10.0.0.227 20        0        0 S   0.09   0.00 kworker/1:1
 9933 root       10.0.0.227 20        0        0 S   0.09   0.00 kworker/u4:2
16181 root       10.0.0.228 20        0        0 S   0.08   0.00 kworker/u4:1
 1744 root       10.0.0.228 20        0        0 S   0.08   0.00 kworker/1:1
28506 root       10.0.0.228 20    95372     7348 S   0.08   0.97 sshd
  691 messagebus 10.0.0.228 20    42896     3872 S   0.08   0.51 dbus-daemon
11892 root       10.0.2.15  20        0        0 S   0.08   0.00 kworker/0:2
15609 root       10.0.2.15  20   403548    12908 S   0.08   1.70 apache2
  256 root       10.0.2.15  20        0        0 S   0.08   0.00 jbd2/dm-0-8
  840 root       10.0.2.15  20   316200     1308 S   0.08   0.17 VBoxService
14694 root       10.0.0.227 20    95368     7200 S   0.00   0.95 sshd
12724 n/a        10.0.0.227 20     4508     1780 S   0.00   0.23 mysqld_safe
10974 root       10.0.0.227 20    95368     7400 S   0.00   0.98 sshd
14712 root       10.0.0.227 20    95368     7384 S   0.00   0.97 sshd
16952 root       10.0.0.227 20    95368     7344 S   0.00   0.97 sshd
17025 root       10.0.0.227 20    95368     7100 S   0.00   0.94 sshd
27075 root       10.0.0.227 20        0        0 S   0.00   0.00 kworker/u4:1
27169 root       10.0.0.227 20        0        0 S   0.00   0.00 kworker/0:0
  881 root       10.0.0.227 20    37976      760 S   0.00   0.10 rpc.mountd
  100 root       10.0.0.227  0        0        0 S   0.00   0.00 deferwq
  102 root       10.0.0.227  0        0        0 S   0.00   0.00 bioset
11876 root       10.0.0.227 20     9588     2572 S   0.00   0.34 bash
11852 root       10.0.0.227 20    95368     7352 S   0.00   0.97 sshd
  104 root       10.0.0.227  0        0        0 S   0.00   0.00 kworker/1:1H

At the top of the output you’ll see CPU and memory statistics aggregated across the whole cluster, followed by the list of processes from all of the nodes.
This can be extremely useful if you need to figure out what’s causing the load and which node is the most affected one.

Hopefully, the CLI tool makes it easier for you to integrate ClusterControl with external scripts and infrastructure orchestration tools. We hope you’ll enjoy using this tool and if you have any feedback on how to improve it, feel free to let us know.

Automating and Managing MongoDB in the Cloud


Database management has traditionally been complex and time-consuming. Deployment, with its security concerns, complex networking, backup planning and implementation, and monitoring, has been a headache. Scaling out your database cluster has been a major undertaking. And in a world where 24/7 availability and rapid disaster recovery are expected, managing even a single database cluster can be a full-time job.

Severalnines’ ClusterControl is a database deployment and management system that addresses the above, facilitating rapid deployment of redundant, secure database clusters or nodes, including advanced backup and monitoring functionality - whether on premise or in the cloud. With plugins supporting Nagios, PagerDuty, and Zabbix, among others, ClusterControl integrates well with existing infrastructure and tools to help you manage your database servers with confidence.

MongoDB is the leading NoSQL database server in the world today. Using ClusterControl, you can deploy and manage either official MongoDB or Percona Server for MongoDB, Percona’s competing offering that incorporates MongoDB Enterprise features. We are going to walk through deploying a MongoDB Replica Set with three data nodes, and look at some of the features of the ClusterControl application.

We’re going to run through some key features of ClusterControl, especially as they pertain to MongoDB, using Amazon Web Services. Amazon Web Services (or AWS) is the largest Infrastructure as a Service cloud provider globally, hosting millions of users all over the world. It comprises many services for all use cases, from virtually unlimited object storage with S3 and highly scalable virtual machine infrastructure using EC2, all the way to enterprise data warehousing with Redshift and even machine learning.

Once you’ve read this blog, you may also wish to read our DIY Cloud Database on Amazon Web Services Whitepaper, which discusses configuration and performance considerations for database servers in the AWS Cloud in more detail. In addition, we have Become a MongoDB DBA, a whitepaper with more in-depth MongoDB-specific detail.

To begin, first you will need to deploy four AWS instances. For a production platform, the instance type should be carefully chosen based on the guidelines we have previously discussed, but for our purposes instances with 2 virtual CPUs and 4GB RAM will be sufficient. One of these nodes will host ClusterControl, the others will be used to deploy the three database nodes.

Begin by creating your database nodes’ security group, allowing inbound traffic on port 27017. There is no need to restrict outbound traffic, but should you wish to do so, allow outbound traffic on ports 1024-65535 to facilitate outbound communication from the database servers.

Next create the security group for your ClusterControl node. Allow inbound traffic on ports 22 and 80. Add this security group ID to your database nodes’ security group, and allow unrestricted TCP communication between the two. This will facilitate communication between the two security groups, without allowing ssh access to the database nodes from external clients.
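As a sketch, the two security groups above could also be created with the AWS CLI. This is a dry run that only prints the calls it would make; the group names are assumptions, and real calls need a VPC and CIDR ranges appropriate to your account:

```shell
# Dry-run helper: print each AWS CLI call instead of executing it.
run() { echo "+ $*"; }   # change the body to "$@" to execute for real

# ClusterControl group: ssh and HTTP from outside.
run aws ec2 create-security-group --group-name clustercontrol \
    --description "ClusterControl node"
run aws ec2 authorize-security-group-ingress --group-name clustercontrol \
    --protocol tcp --port 22 --cidr 0.0.0.0/0
run aws ec2 authorize-security-group-ingress --group-name clustercontrol \
    --protocol tcp --port 80 --cidr 0.0.0.0/0

# Database nodes' group: MongoDB traffic, reachable from the ClusterControl group.
run aws ec2 create-security-group --group-name db-nodes \
    --description "MongoDB replica set nodes"
run aws ec2 authorize-security-group-ingress --group-name db-nodes \
    --protocol tcp --port 27017 --source-group clustercontrol
```

Reviewing the printed commands before flipping the helper to live execution is a cheap way to catch an overly permissive rule.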

Launch the instances into their respective security groups, choosing for each instance a KeyPair for which you have the ssh key. For the purposes of this task, use the same KeyPair for all instances. If you have lost the ssh key for your KeyPair, you will have to create a new KeyPair. When launching the instances, do not choose the default Amazon Linux image, instead choose an AMI based on a supported operating system listed here. As I am using AWS region EU-CENTRAL-1, I will use community AMI ami-fa2df395, a CentOS 7.3 image, for this purpose.

If you have the AWS command line tools installed, use the aws ec2 describe-instances command detailed previously to confirm that your instances are running (otherwise, view your instances in the AWS web console). Once confirmed, log in to the ClusterControl instance via ssh.

Copy the private key file you downloaded when creating your KeyPair to the ClusterControl instance. You can use the scp command for this purpose. For now, let’s leave it in the default /home/centos directory, the home directory of the centos user. I have called mine s9s.pem. You will also need the wget tool installed; install it using the following command:

$ sudo yum -y install wget

To install ClusterControl, run the following commands:

$ wget http://www.severalnines.com/downloads/cmon/install-cc
$ chmod +x install-cc
$ ./install-cc # as root or sudo user

The installation will walk you through some initial questions, after which it will take a few minutes to retrieve and install dependencies using your operating system’s package manager.

When installation is complete, point your web browser to http://<address of your ClusterControl instance>. You can find the external facing address of the instance using the describe-instances command, or via the AWS web console.

Once you have successfully logged in, you will see the following screen, and can continue to deploy your MongoDB Replica Set.

Figure 1: Welcome to ClusterControl!

As you can see, ClusterControl can also import existing database clusters, allowing it to manage your existing infrastructure as easily as new deployments.

For our purposes, you are going to click Deploy Database Cluster. On the next screen you will see the selection of database servers and cluster types that ClusterControl supports. Click the tab labelled MongoDB ReplicaSet. Here the values with which you are concerned are SSH User, SSH Key Path, and Cluster Name. The port should already be 22, the default ssh port, and the AMI we are using does not require a Sudo Password.

Figure 2: Deploying a MongoDB Replica Set

The ssh user for the CentOS 7 AMI is centos, and the SSH Key Path is /home/centos/s9s.pem, or the appropriate path depending on your own Key file name. Let’s use MongoDB-RS0 as the Cluster Name. Accepting the default options, we click Continue.

Figure 3: Configuring your deployment

Here we can choose between the official MongoDB build and a Percona build. Select whichever you prefer, and supply an admin user and password with which to configure MongoDB securely. Note that ClusterControl will not let you proceed unless you provide these details. Make a note of the credentials you have supplied; you will need them to log in to the deployed MongoDB database if you wish to use it later. Now choose a Replica Set name, or accept the default. We are going to use the vendor repositories, but be aware that you can configure ClusterControl to use your own repositories, or those of a third party, if you prefer.

Add your database nodes, one at a time. You can choose to use the external IP address, but if you provide the hostname, which is generally recommended, ClusterControl will record all network interfaces of the hosts, and you will be able to choose the interface on which you would like to deploy. Once you have added your three database nodes, click Deploy. ClusterControl will now deploy your MongoDB Replica Set. Click Full Job Details to observe as it carries out the configuration of your cluster. When the job is complete, go to the Database Clusters screen and view your cluster.

Figure 4: Auto Recovery

Taking a closer look, you can see that Auto Recovery is enabled at both a cluster and a node level; in the case of failures, ClusterControl will attempt to recover your cluster or the individual node having an issue. The green tick beside each node also displays the cluster’s health status at a glance.

Figure 5: Scheduling Backups

The last feature we will cover here is Backups. ClusterControl provides a backup feature that allows a consistent backup of the full cluster, or simply a standard mongodump backup if you prefer. It also lets you create scheduled backups that run on a schedule of your choosing. Backup retention is handled as well, with the option to retain backups for a limited period, avoiding storage issues.

In this blog I’ve attempted to give you a brief overview of using ClusterControl with MongoDB, but there are many more features supported by ClusterControl. Deployment of Sharded Clusters, with hidden and/or delayed slaves, arbiters and other features are all available. More information is available on our website, where you can also find webinars, whitepapers, tutorials, and training, and try out ClusterControl free.

Percona’s Tyler Duzan Joins Us for “How to Automate and Manage Your MongoDB or Percona Server for MongoDB Database” Webinar


We’re excited to announce that Tyler Duzan, Product Manager at Percona, will join our colleague Ruairí Newman, Senior Support Engineer, on October 24th to present a different perspective on how to automate and manage your MongoDB or Percona Server for MongoDB databases.

Tyler will walk us through some of the key features of the drop-in compatible Percona Server for MongoDB as it compares to MongoDB, and how it is “Community ++”.

We had the chance to catch up with Tyler to ask him about what he’s going to be discussing during this webinar. Watch the interview here.

Ruairí will talk about the tools that are available, both commercial and open source, to aid with the automation of operational MongoDB tasks. There are a small number of specialist domain-specific automation tools available, and Ruairí will compare the MongoDB-relevant functionality of two of these products: MongoDB’s Ops Manager, and ClusterControl from Severalnines.

Sign up below to hear all about Percona Server for MongoDB and the differences between these two tools, and how they help automate and manage MongoDB operations. We’ll cover key aspects from installation and maintenance, through backing up your deployments, to performing upgrades.

“See” you there!

Date, Time & Registration

Europe/MEA/APAC

Tuesday, October 24th at 09:00 BST / 10:00 CEST (Germany, France, Sweden)

Register Now

North America/LatAm

Tuesday, October 24th at 09:00 PDT (US) / 12:00 EDT (US)

Register Now

Agenda

  • Introduction to Percona Server for MongoDB
  • How to automate and manage MongoDB
    • Installation and maintenance
    • Complexity of architecture
    • Options for redundancy
    • Comparative functionality
    • Monitoring, Dashboard, Alerting
    • Backing up your deployments
    • Automated deployment of advanced configurations
    • Upgrading existing deployments

Speakers

Ruairí Newman, Senior Support Engineer at Severalnines, is passionate about all things cloud and automation and has worked for MongoDB, VMware and Amazon Web Services among others. He has a background in Operational Support Systems and Professional Services.

Prior to joining Severalnines, Ruairí worked for Huawei Ireland as Senior Cloud Solutions Architect on their Web Services project, where he advised on commodity cloud architecture and Monitoring technologies, and deployed and administered a Research & Development Openstack lab.

Tyler Duzan: Prior to joining Percona as a Product Manager, Tyler spent almost 13 years as an operations and security engineer in a variety of different industries. Deciding to take his analytical mindset and strategic focus into new territory, Tyler is applying his knowledge to solving business problems for Percona customers with inventive solutions combining technology and services.

New ClusterControl Template for Zabbix


We’ve had Zabbix templates for ClusterControl available for three years now, so that Zabbix is able to connect to ClusterControl and retrieve monitoring data as well as alerts. We recently rewrote major parts of the templates to use the ClusterControl RPC interface.

In this blog post, we’ll have a look at what’s new. Note that we provide a bunch of other integrations with third party tools that you can take advantage of.

What has Changed?

In the older version, the Zabbix plugin made use of the ClusterControl CMONAPI to retrieve monitoring data, process the output and make it understandable to the Zabbix agent. Since version 1.3, ClusterControl has switched to a new interface called CMON RPC, which listens on port 9500 on the ClusterControl node (9501 if you run with TLS). This new API is an improved version of the CMONAPI interface, so the agent script now communicates with ClusterControl through this RPC interface. If you are on version 1.3 or later, you should upgrade to this new script, as the old one will no longer work; some system calls in the CMONAPI have been completely removed and replaced by CMON RPC.

Another significant change is that you don’t need to configure the RPC URL and token directly, as the script will automatically read the configuration options from ClusterControl’s bootstrap.php file (defaulting to /var/www/html/clustercontrol/bootstrap.php). Due to these changes, we deprecated two other files inside this template: clustercontrol.conf (the configuration file for the CMONAPI URL and token) and clustercontrol_api.sh (a curl wrapper to connect to the CMONAPI). The same dependencies, such as php-curl and php-cli, are still required, as explained in the template’s readme file.
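For illustration, the token lookup the script performs can be approximated in a couple of lines of shell. The define() line below is a fabricated sample of the sort of entry bootstrap.php contains; the exact constant name and formatting in your installation may differ:

```shell
# Fabricated sample line; a real script would read it from the file, e.g.:
#   line=$(grep "RPC_TOKEN" /var/www/html/clustercontrol/bootstrap.php)
line="define('RPC_TOKEN', 'abc123');"

# Pull the quoted value out of the define() call.
token=$(echo "$line" | sed -n "s/.*define('RPC_TOKEN', *'\([^']*\)').*/\1/p")
echo "RPC token: $token"
```

Because the token is read at runtime, rotating it in bootstrap.php requires no change to the Zabbix side.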

Monitoring Data

The ClusterControl template returns the following monitoring data from multiple clusters:

  • Database cluster status - reports the cluster state: active, failed, degraded or unknown.
  • ClusterControl alarms (critical/warning) - reports the number of ClusterControl alarms based on severity.

As well as ClusterControl related services:

  • ClusterControl controller service (cmon)
  • ClusterControl web SSH service (cmon-ssh)
  • ClusterControl notification service (cmon-events)

It also supports multiple clusters, where you can specify the components and clusters to be monitored by the agent. You can skip some clusters (e.g., a test cluster) for alerting simply by excluding the cluster ID from the list. The previous version had backup status as part of the monitoring data; in this version, however, backup is a sub-component of alarms, so monitoring the alarms should be enough.

Example Deployment

Installing the template should be pretty straightforward. Consider the following setup:

Our setup consists of a Galera Cluster, a standalone PostgreSQL server and two MySQL Replication nodes, all managed by a ClusterControl instance. The Zabbix agent will be installed on the ClusterControl node, and it uses some reporting scripts to talk to ClusterControl to retrieve monitoring data. The Zabbix server will use a ClusterControl template to talk to the Zabbix agent.

Zabbix Agent

  1. Installation instructions can be found on our GitHub repository page. To get the template, just clone the s9s-admin repository:

    $ git clone https://github.com/severalnines/s9s-admin
  2. Create a template directory for ClusterControl under /var/lib/zabbix and copy the scripts directory into it:

    $ mkdir -p /var/lib/zabbix/clustercontrol
    $ cp -Rf ~/s9s-admin/plugins/zabbix/agent/scripts /var/lib/zabbix/clustercontrol
  3. Copy the ClusterControl template user parameter file into /etc/zabbix/zabbix_agentd.d/:

    $ cp -f ~/s9s-admin/plugins/zabbix/agent/userparameter_clustercontrol.conf /etc/zabbix/zabbix_agentd.d/
  4. This template uses the ClusterControl CMON RPC interface to collect stats. The script will copy /var/www/html/clustercontrol/bootstrap.php into the template directory to read ClusterControl configuration options. If you are running on a non-default path for the ClusterControl UI, configure the exact path manually inside clustercontrol_stats.php, similar to the example below:

    $BOOTSTRAP_PATH = '/var/www/html/clustercontrol/bootstrap.php';
  5. Test the script by invoking a cluster ID and test argument:

    $ /var/lib/zabbix/clustercontrol/scripts/clustercontrol_stats.sh 1,2,3,4,5 test
    Cluster ID: 1, Cluster Name: MariaDB 10.1, Cluster Type: galera, Cluster Status: STARTED
    Cluster ID: 2, Cluster Name: PostgreSQL, Cluster Type: postgresql_single, Cluster Status: STARTED
    Cluster ID: 3, Cluster Name: MySQLRep, Cluster Type: replication, Cluster Status: STARTED
    Cluster ID 4 not found.
    Cluster ID 5 not found.

    ** This example shows that the ClusterControl instance has 3 clusters, although we defined 5 cluster IDs in the command line.

    You should get an output of your database cluster summary, indicating the script is able to retrieve information using the provided ClusterControl RPC interface with the correct token in bootstrap.php.

  6. Finally, restart the Zabbix agent:

    $ service zabbix-agent restart

Installation of the Zabbix agent is now complete.
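The user parameter file copied in step 3 is what maps these checks to Zabbix item keys. A hypothetical entry (the key name and arguments here are illustrative only; consult the shipped userparameter_clustercontrol.conf for the real definitions) has this shape:

```ini
# UserParameter=<item key>,<command the agent runs>
UserParameter=clustercontrol.status[*],/var/lib/zabbix/clustercontrol/scripts/clustercontrol_stats.sh $1 $2
```

The `[*]` suffix lets the Zabbix server pass the cluster ID list and check name through as key parameters, which is how one script serves multiple clusters.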

Zabbix Server

  1. Download the Zabbix template file from here to your desktop.

  2. Import the XML template using the Zabbix UI (Configuration -> Templates -> Import). Verify if the import is OK by going to Configuration -> Templates -> ClusterControl Template -> Items and you should see something like this:

  3. Create/edit hosts and link them to the template "ClusterControl Template" (Configuration -> Hosts -> choose a host -> Templates tab):

You are done. The following is what you can expect to see in your Zabbix frontend UI if something goes wrong:

Summary

ClusterControl has a sophisticated alarming system. In cases where Zabbix is the main monitoring and alerting tool, this simple integration will unify your database alerts into Zabbix.

We would be happy to continue and improve this template, do let us know what kind of information that you would like to see in Zabbix in the comments section below.

The Galera Cluster & Severalnines Teams Present: How to Manage Galera Cluster with ClusterControl


Join us on November 14th 2017 as we combine forces with the Codership Galera Cluster Team to talk about how to manage Galera Cluster using ClusterControl!

Galera Cluster has become one of the most popular high availability solutions for MySQL and MariaDB, and ClusterControl is the de facto automation and management system for Galera Cluster.

We’ll be joined by Seppo Jaakola, CEO of Codership - Galera Cluster, and together, we’ll demonstrate what it is that makes Galera Cluster such a popular high availability solution for MySQL and MariaDB and how to best manage it with ClusterControl.

We’ll discuss the latest features of Galera Cluster with Seppo, one of the creators of Galera Cluster. We’ll also demo how to automate it all, from deployment, monitoring, backups, failover, recovery and rolling upgrades to scaling, using the new ClusterControl CLI.

Sign up below!

Date, Time & Registration

Europe/MEA/APAC

Tuesday, November 14th at 09:00 GMT / 10:00 CET (Germany, France, Sweden)

Register Now

North America/LatAm

Tuesday, November 14th at 09:00 PT (US) / 12:00 ET (US)

Register Now

Agenda

  • Introduction
    • About Codership, the makers of Galera Cluster
    • About Severalnines, the makers of ClusterControl
  • What’s new with Galera Cluster
    • Core feature set overview
    • The latest features
    • What’s coming up
  • ClusterControl for Galera Cluster
    • Deployment
    • Monitoring
    • Management
    • Scaling
  • Live Demo
  • Q&A

Speakers

Seppo Jaakola, Founder of Codership, has over 20 years of experience in software engineering. He started his professional career at Digisoft and Novo Group Oy, working as a software engineer on various technical projects. He then worked for 10 years at Stonesoft Oy as a Project Manager on projects dealing with DBMS development, data security and firewall clustering. In 2003, he joined Continuent Oy, where he worked as team leader for the MySQL clustering product. This position tied together his earlier experience in DBMS research and distributed computing. Now he’s applying his years of experience and administrative skills to steer Codership on the right course. Seppo holds an MSc degree in Software Engineering from Helsinki University of Technology.

Krzysztof Książek, Senior Support Engineer at Severalnines, is a MySQL DBA with experience managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard.

Viewing all 195 articles
Browse latest View live