Common Settings
The Hardware Recommendations section provides some hardware guidelines for configuring a Ceph Storage Cluster. It is possible for a single Ceph Node to run multiple daemons. For example, a single node with multiple drives may run one ceph-osd for each drive. Ideally, you will have a node for a particular type of process. For example, some nodes may run ceph-osd daemons, other nodes may run ceph-mds daemons, and still other nodes may run ceph-mon daemons.
Each node has a name, identified by the host setting. Monitors also specify a network address and port (i.e., a domain name or IP address), identified by the addr setting. A basic configuration file will typically specify only minimal settings for each monitor daemon instance. For example:
[global]
mon_initial_members = ceph1
mon_host = 10.0.0.1
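The per-daemon form that the host and addr settings refer to appears in older, manually deployed configurations. A minimal sketch of that legacy form, with a hypothetical hostname and address, might look like:
[mon.a]
host = ceph1
mon_addr = 10.0.0.1:6789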
Important
The host setting is the short name of the node (i.e., not an FQDN). It is NOT an IP address either. Run hostname -s on the command line to retrieve the name of the node. Do not use host settings for anything other than initial monitors unless you are deploying Ceph manually. You MUST NOT specify host under individual daemons when using deployment tools like chef or cephadm, as those tools will enter the appropriate values for you in the cluster map.
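For example, on a node whose FQDN is (hypothetically) ceph1.example.com, the short name is what you want:
hostname -s    # prints "ceph1" -- use this short name
hostname -f    # prints "ceph1.example.com" -- do NOT use the FQDN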
Networks
See the Network Configuration Reference for a detailed discussion about configuring a network for use with Ceph.
Monitors
Production Ceph clusters typically provision a minimum of three Ceph Monitor daemons to ensure availability should a monitor instance crash. A minimum of three ensures that the Paxos algorithm can determine which version of the Ceph Cluster Map is the most recent from a majority of Ceph Monitors in the quorum.
Note
You may deploy Ceph with a single monitor, but if the instance fails, the lack of other monitors may interrupt data service availability.
Ceph Monitors normally listen on port 3300 for the new v2 protocol and on port 6789 for the old v1 protocol.
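A mon_host entry can list both protocols explicitly; a sketch with a hypothetical address:
mon_host = [v2:10.0.0.1:3300,v1:10.0.0.1:6789]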
By default, Ceph expects to store monitor data under the following path:
/var/lib/ceph/mon/$cluster-$id
You or a deployment tool (e.g., cephadm) must create the corresponding directory. With metavariables fully expressed and a cluster named "ceph", the foregoing directory would evaluate to:
/var/lib/ceph/mon/ceph-a
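If you are deploying manually rather than with a tool, you would create this directory yourself; a sketch for the example directory above:
ssh {mon-host}
sudo mkdir -p /var/lib/ceph/mon/ceph-a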
For additional details, see the Monitor Config Reference.
Authentication
New in version 0.56 (Bobtail).
For Bobtail (v0.56) and beyond, you should expressly enable or disable authentication in the [global] section of your Ceph configuration file.
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
Additionally, you should enable message signing. See Cephx Config Reference for details.
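A minimal sketch, assuming you want signature checks enforced cluster-wide (see the Cephx Config Reference for the authoritative option list and current defaults):
cephx_require_signatures = true
cephx_sign_messages = true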
OSDs
Ceph production clusters typically deploy Ceph OSD Daemons with one OSD daemon per storage device on each node. The BlueStore back end is now the default, but when using the legacy Filestore back end you must specify a journal size. For example:
[osd]
osd_journal_size = 10000
[osd.0]
host = {hostname} #manual deployments only.
By default, Ceph expects to store a Ceph OSD Daemon's data at the following path:
/var/lib/ceph/osd/$cluster-$id
You or a deployment tool (e.g., cephadm) must create the corresponding directory. With metavariables fully expressed and a cluster named "ceph", this example would evaluate to:
/var/lib/ceph/osd/ceph-0
You may override this path using the osd_data setting. We recommend not changing the default location. Create the default directory on your OSD host:
ssh {osd-host}
sudo mkdir /var/lib/ceph/osd/ceph-{osd-number}
The osd_data path ideally leads to a mount point on a device that is separate from the device that contains the operating system and daemons. If an OSD is to use a device other than the OS device, prepare it for use with Ceph and mount it to the directory you just created:
ssh {new-osd-host}
sudo mkfs -t {fstype} /dev/{disk}
sudo mount -o user_xattr /dev/{disk} /var/lib/ceph/osd/ceph-{osd-number}
We recommend using the xfs file system when running mkfs. (The btrfs and ext4 file systems are not recommended and are no longer tested.)
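For instance, with a hypothetical data device /dev/sdb1 and OSD number 0, the template above becomes the following (xfs supports user extended attributes natively, so no user_xattr mount option is needed):
ssh new-osd-host
sudo mkfs -t xfs /dev/sdb1
sudo mount /dev/sdb1 /var/lib/ceph/osd/ceph-0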
See the OSD Config Reference for additional configuration details.
Heartbeats
During runtime operations, Ceph OSD Daemons check up on other Ceph OSD Daemons and report their findings to the Ceph Monitor. You do not have to provide any settings. However, if you have network latency issues, you may wish to modify the settings.
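As an illustrative sketch only (the values are not recommendations; see the reference below for defaults), a latency-tolerant configuration might raise the heartbeat timings:
[osd]
osd_heartbeat_interval = 12   # seconds between peer checks (default 6)
osd_heartbeat_grace = 40      # seconds before a peer is reported down (default 20)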
See Configuring Monitor/OSD Interaction for additional details.
Logs / Debugging
Sometimes you may encounter issues with Ceph that require modifying logging output and using Ceph's debugging. See Debugging and Logging for details on log rotation.
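For example, a sketch that raises the messenger and OSD debug levels (the n/m form sets the log output level and the in-memory level, respectively):
[global]
debug_ms = 1/5
debug_osd = 5/20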
Example ceph.conf
[global]
fsid = {cluster-id}
mon_initial_members = {hostname}[, {hostname}]
mon_host = {ip-address}[, {ip-address}]
#All clusters have a front-side public network.
#If you have two network interfaces, you can configure a private / cluster
#network for RADOS object replication, heartbeats, backfill,
#recovery, etc.
public_network = {network}[, {network}]
#cluster_network = {network}[, {network}]
#Clusters require authentication by default.
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
#Choose reasonable numbers for journals, number of replicas
#and placement groups.
osd_journal_size = {n}
osd_pool_default_size = {n} # Write an object n times.
osd_pool_default_min_size = {n} # Allow writing n copies in a degraded state.
osd_pool_default_pg_num = {n}
osd_pool_default_pgp_num = {n}
#Choose a reasonable crush leaf type.
#0 for a 1-node cluster.
#1 for a multi-node cluster in a single rack
#2 for a multi-node, multi-chassis cluster with multiple hosts in a chassis
#3 for a multi-node cluster with hosts across racks, etc.
osd_crush_chooseleaf_type = {n}
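As a concrete illustration only, the template filled in with hypothetical values for a small three-node cluster might look like this (fsid, hostnames, and addresses are all placeholders):
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 10.0.0.1, 10.0.0.2, 10.0.0.3
public_network = 10.0.0.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_journal_size = 10000
osd_pool_default_size = 3
osd_pool_default_min_size = 2
osd_pool_default_pg_num = 128
osd_pool_default_pgp_num = 128
osd_crush_chooseleaf_type = 1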
Naming Clusters (deprecated)
Each Ceph cluster has an internal name that is used as part of configuration
and log file names as well as directory and mountpoint names. This name
defaults to "ceph". Previous releases of Ceph allowed one to specify a custom
name instead, for example "ceph2". This was intended to facilitate running
multiple logical clusters on the same physical hardware, but in practice this
was rarely exploited and should no longer be attempted. Prior documentation could also be misinterpreted as requiring unique cluster names in order to use rbd-mirror.
Custom cluster names are now considered deprecated and the ability to deploy them has already been removed from some tools, though existing custom name deployments continue to operate. The ability to run and manage clusters with custom names may be progressively removed by future Ceph releases, so it is strongly recommended to deploy all new clusters with the default name "ceph".
Some Ceph CLI commands accept an optional --cluster (cluster name) option. This option is present purely for backward compatibility and need not be accommodated by new tools and deployments.
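For example, a legacy invocation against a custom-named cluster might look like this (the cluster name is hypothetical):
ceph --cluster ceph2 status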
If you do need to allow multiple clusters to exist on the same host, please use Cephadm, which uses containers to fully isolate each cluster.