
Ceph File System Installation

[Date: 2013-05-31]  Source: Linux Community  Author: Linux

1. Environment setup

  • I created six virtual machines with VirtualBox: one monitor node, two metadata (MDS) nodes, and three data (OSD) nodes; the monitor node also doubles as the client. The hostnames and IPs are listed below (a sketch of matching /etc/hosts entries follows the list, in case these names do not resolve on your network):

mon1.ihep.ac.cn    192.168.56.107
mds1.ihep.ac.cn    192.168.56.108
mds2.ihep.ac.cn    192.168.56.109
osd1.ihep.ac.cn    192.168.56.110
osd2.ihep.ac.cn    192.168.56.111
osd3.ihep.ac.cn    192.168.56.112
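If these hostnames are not resolvable on your network, the simplest fix is to add them to /etc/hosts on every node. A minimal sketch built from the table above (my machines already resolve through DNS, so treat this as optional):

192.168.56.107  mon1.ihep.ac.cn  mon1
192.168.56.108  mds1.ihep.ac.cn  mds1
192.168.56.109  mds2.ihep.ac.cn  mds2
192.168.56.110  osd1.ihep.ac.cn  osd1
192.168.56.111  osd2.ihep.ac.cn  osd2
192.168.56.112  osd3.ihep.ac.cn  osd3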

The operating system is Fedora 14 (kernel 2.6.35). A small suggestion: use a kernel of 2.6.34 or later, since those kernels already include the Ceph client and btrfs; getting older kernels to work takes far too much time. Of course, it also depends on what you are after; I just wanted to set up an environment to test Ceph.

On all the virtual machines, make sure to turn off iptables and SELinux. This is only a test environment, and problems often turn out to be caused by them; turning them off saves a lot of headaches. (This is just for testing; be more careful in production.)

Set up passwordless SSH login between the machines. Run the following commands on any one of them:

#ssh-keygen -t dsa ## press Enter through the prompts; this creates ~/.ssh/id_dsa and ~/.ssh/id_dsa.pub

#cd ~/.ssh; cat id_dsa.pub >>authorized_keys

## copy the .ssh directory to the same user's home directory on the other machines
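The copy can be done with scp; a minimal sketch, assuming you are working as root and the hostnames above resolve (you will be asked for each machine's password one last time):

#for h in mds1 mds2 osd1 osd2 osd3; do scp -r ~/.ssh root@$h.ihep.ac.cn:~/ ; done
#ssh root@mds1.ihep.ac.cn hostname ## should now print the hostname without asking for a password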


2. Install Ceph on all six nodes. The steps are essentially the same everywhere (the data nodes need one extra step, described below).

  • Download Ceph

#wget http://ceph.newdream.net/download/ceph-0.27.1.tar.gz

  • Extract, configure, and install

#tar xzvf ceph-0.27.1.tar.gz

#cd ceph-0.27.1

# ./autogen.sh
#./configure ## it may complain about missing dependencies; install the packages it asks for and rerun

# make
# make install

#cp ./src/sample.* /usr/local/etc/ceph/ ## copy the sample Ceph configuration files; they are edited later

#mv /usr/local/etc/ceph/sample.ceph.conf /usr/local/etc/ceph/ceph.conf

#mv /usr/local/etc/ceph/sample.fetch_config /usr/local/etc/ceph/fetch_config

#cp ./src/init-ceph /etc/init.d/ceph

#mkdir /var/log/ceph ## for the logs; Ceph does not yet create this directory by itself

#mkdir /data ## holds the monitor/MDS/OSD data; the configuration below refers to it
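A quick sanity check that everything landed where the later steps expect it (these are the default ./configure install paths; adjust if you passed --prefix):

#ls /usr/local/bin /usr/local/sbin | grep -i -E 'ceph|cmon|cmds|cosd'
#ls /usr/local/etc/ceph ## should list ceph.conf and fetch_config after the renames above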

  • On the data nodes, set up a btrfs file system. Note that this step is only for the data (OSD) nodes; the other node types do not need it.

#yum install btrfs-progs

#fdisk /dev/sda ## interactively create a new partition (see man fdisk if the prompts are unfamiliar); my sda disk still had some free space, so I created /dev/sda4 on it

#mkfs.btrfs /dev/sda4
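Before moving on, it is worth confirming that the new file system actually mounts (the /mnt/test mount point is only for this check and is not used later):

#mkdir -p /mnt/test
#mount -t btrfs /dev/sda4 /mnt/test && df -h /mnt/test
#umount /mnt/test ## leave it unmounted; mkcephfs --mkbtrfs in step 4 should take care of mounting the OSD data directories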

 

3. Configuration. The main things to edit are the two files under /usr/local/etc/ceph/: ceph.conf and fetch_config.

  • The contents of the ceph.conf file I used:

; global
[global]
; enable secure authentication
;auth supported = cephx ## leave this commented out; it enables authentication, which is unnecessary for a test setup

; allow ourselves to open a lot of files
max open files = 131072

; set up logging
log file = /var/log/ceph/$name.log

; set up pid files
pid file = /var/run/ceph/$name.pid

; monitors
; You need at least one. You need at least three if you want to
; tolerate any node failures. Always create an odd number.
[mon]
mon data = /data/mon$id

; logging, for debugging monitor crashes, in order of
; their likelihood of being helpful :)
;debug ms = 1
;debug mon = 20
;debug paxos = 20
;debug auth = 20

[mon.0]
host = mon1.ihep.ac.cn
mon addr = 192.168.56.107:6789
;[mon.1]
; host = beta
; mon addr = 192.168.0.11:6789

;[mon.2]
; host = gamma
; mon addr = 192.168.0.12:6789

; mds
; You need at least one. Define two to get a standby.
[mds]
; where the mds keeps its secret encryption keys
keyring = /data/keyring.$name

; mds logging to debug issues.
;debug ms = 1
;debug mds = 20

[mds.alpha]
host = mds1.ihep.ac.cn

[mds.beta]
host = mds2.ihep.ac.cn

; osd
; You need at least one. Two if you want data to be replicated.
; Define as many as you like.
[osd]
; This is where the btrfs volume will be mounted.
osd data = /data/osd$id

; Ideally, make this a separate disk or partition. A few
; hundred MB should be enough; more if you have fast or many
; disks. You can use a file under the osd data dir if need be
; (e.g. /data/osd$id/journal), but it will be slower than a
; separate disk or partition.

; This is an example of a file-based journal.

osd journal = /data/osd$id/journal
osd journal size = 1000 ; journal size, in megabytes

; osd logging to debug osd issues, in order of likelihood of being
; helpful
;debug ms = 1
;debug osd = 20
;debug filestore = 20
;debug journal = 20

[osd.0]
host = osd1.ihep.ac.cn

; if 'btrfs devs' is not specified, you're responsible for
; setting up the 'osd data' dir. if it is not btrfs, things
; will behave up until you try to recover from a crash (which
; is usually fine for basic testing).
btrfs devs = /dev/sda4

[osd.1]
host = osd2.ihep.ac.cn
btrfs devs = /dev/sda4

[osd.2]
host = osd3.ihep.ac.cn
btrfs devs = /dev/sda4

;[osd.3]
; host = eta
; btrfs devs = /dev/sdy
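For reference, the $name and $id variables in the paths above expand per daemon: with this configuration mon.0 uses /data/mon0, osd.1 uses /data/osd1, and so on. Pre-creating those directories on the node that runs each daemon does no harm even if mkcephfs ends up creating them itself; a sketch:

## on mon1.ihep.ac.cn
#mkdir -p /data/mon0
## on osd1/osd2/osd3 respectively (the directory number follows the osd id, not the hostname)
#mkdir -p /data/osd0 ## osd1.ihep.ac.cn runs osd.0; use /data/osd1 and /data/osd2 on the other two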

 

  • The contents of the fetch_config file I used:

#!/bin/sh
conf="$1"

## fetch ceph.conf from some remote location and save it to $conf.
##
## make sure this script is executable (chmod +x fetch_config)

##
## examples:
##

## from a locally accessible file

## from a URL:
# wget -q -O $conf http://somewhere.com/some/ceph.conf

## via scp
# scp -i /path/to/id_dsa user@host:/path/to/ceph.conf $conf

scp root@mon1.ihep.ac.cn:/usr/local/etc/ceph/ceph.conf $conf
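As the comment at the top of the script says, it has to be executable; it is also easy to test by hand, assuming the passwordless SSH from step 1 works:

#chmod +x /usr/local/etc/ceph/fetch_config
#/usr/local/etc/ceph/fetch_config /tmp/ceph.conf.test
#diff /tmp/ceph.conf.test /usr/local/etc/ceph/ceph.conf ## no output means the fetch works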

4. Create the file system and start it. All of the following is done on the monitor node.

#mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs

#mkdir /etc/ceph; cp /usr/local/etc/ceph/* /etc/ceph ## I do not remember exactly why, but it seemed to be needed (the ceph tools look for /etc/ceph/ceph.conf by default)

#/etc/init.d/ceph -a start 
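If the daemons came up, the cluster status should now show the monitor, both MDS daemons, and the three OSDs. A quick check (the -c flag points the ceph tool at the config file explicitly, in case it cannot find it on its own):

#ceph -s
#ceph -c /usr/local/etc/ceph/ceph.conf -s ## same thing, with the config path given explicitly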

5. Mount

#mkdir /ceph

#mount.ceph 192.168.56.107:/ /ceph
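A couple of quick checks that the mount really works (the file copied here is just an arbitrary example):

#df -h /ceph ## should report the combined capacity of the OSDs
#cp /etc/hosts /ceph/ && ls -l /ceph ## trivial write/read test
#umount /ceph ## when you are done testing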
