Ejabberd Installation and Operation Guide
January 26, 2003
1 Introduction
ejabberd is a Free and Open Source distributed fault-tolerant Jabber
server. It is written mostly in Erlang.
TBD
2 Installation
2.1 Installation Requirements
To compile ejabberd, you need the following packages:
- GNU Make;
- GCC;
- libexpat 1.95 or later;
- Erlang/OTP R8B or later.
2.2 Obtaining
No stable version has been released yet.
The latest alpha version can be retrieved via CVS with the following steps:
- export CVSROOT=:pserver:cvs@www.jabber.ru:/var/spool/cvs
- cvs login
- Enter an empty password
- cvs -z3 co ejabberd
2.3 Compilation
./configure
make
TBD
2.4 Starting
erl -name ejabberd -s ejabberd
TBD
3 Configuration
3.1 Initial Configuration
The configuration file is loaded on the first start of ejabberd. It consists
of a sequence of Erlang terms. Anything on a line after a `%' sign is ignored.
Each term is a tuple whose first element is the name of an option, and the
remaining elements are that option's values.
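For example, a minimal config file in this format (the domain and module list here are placeholders for illustration) could look like:

```erlang
% Everything after a `%' sign is a comment and is ignored.
{host, "example.org"}.            % option name, then its value
{modules, [{mod_time, []}]}.      % values may themselves be nested terms
```

Each term is terminated by a period, as in ordinary Erlang source.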
3.1.1 Host Name
The option host defines the name of the Jabber domain that ejabberd
serves. E.g., to use the jabber.org domain, add the following line to the config:
{host, "jabber.org"}.
3.1.2 Listening Sockets
The option listen defines the list of sockets to listen on and the services
run on them. Each element of the list is a tuple with the following elements:
- Port number;
- Module that serves this port;
- Function in this module that starts a connection (likely will be removed);
- Options for this module.
Currently three modules are implemented:
- ejabberd_c2s: serves C2S connections;
- ejabberd_s2s_in: serves incoming S2S connections;
- ejabberd_service: serves connections to Jabber services
(i.e. those that use the jabber:component:accept namespace).
For example, the following configuration specifies that C2S connections are
accepted on port 5222, S2S connections on port 5269, and that the service
conference.jabber.org must connect to port 8888 with the password ``secret'':
{listen, [{5222, ejabberd_c2s, start, []},
{5269, ejabberd_s2s_in, start, []},
{8888, ejabberd_service, start, ["conference.jabber.org", "secret"]}
]}.
3.1.3 Access Rules
Access control in ejabberd is done via Access Control Lists (ACLs). In the
config file they look like this:
{acl, <aclname>, {<acltype>, ...}}.
<acltype> can be one of the following:
- all: Matches all JIDs. Example:
{acl, all, all}.
- {user, <username>}: Matches the local user with name
<username>. Example:
{acl, admin, {user, "aleksey"}}.
- {user, <username>, <server>}: Matches the user with JID
<username>@<server>. Example:
{acl, admin, {user, "aleksey", "jabber.ru"}}.
- {server, <server>}: Matches any JID from server
<server>. Example:
{acl, jabberorg, {server, "jabber.org"}}.
Allowing or denying access to different services looks like this:
{access, <accessname>, [{allow, <aclname>},
{deny, <aclname>},
...
]}.
When a JID is checked for access to <accessname>, the server sequentially
checks whether the JID is in one of the ACLs named as the second element of
each tuple in the list. If one of them matches, the first element of the
matching tuple is returned; otherwise ``deny'' is returned.
Example:
{access, configure, [{allow, admin}]}.
{access, something, [{deny, badmans},
{allow, all}]}.
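As a worked example (the server name below is hypothetical), the ACL and the access rule that uses it can be combined like this:

```erlang
% hypothetical ACL: any JID from spammers.example matches `badmans'
{acl, badmans, {server, "spammers.example"}}.
% deny `something' to badmans, allow everyone else; a JID matching
% no entry at all would get the default ``deny''
{access, something, [{deny, badmans},
                     {allow, all}]}.
```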
3.1.4 Modules
The option modules defines the list of modules that will be loaded after
ejabberd starts. Each list element is a tuple whose first element is the
name of a module and whose second is the list of options for that module. See
section 5 for detailed information on each module.
Example:
{modules, [
{mod_register, []},
{mod_roster, []},
{mod_configure, []},
{mod_disco, []},
{mod_stats, []},
{mod_vcard, []},
{mod_offline, []},
{mod_echo, [{host, "echo.localhost"}]},
{mod_private, []},
{mod_time, [{iqdisc, no_queue}]},
{mod_version, []}
]}.
3.2 Online Configuration
To use the online reconfiguration facility of ejabberd, mod_configure
must be loaded (see section 5.4). It is also highly recommended to load
mod_disco (see section 5.5), because mod_configure integrates
tightly with it. Using a disco- and xdata-capable client is also recommended.
TBD
4 Distribution
4.1 How it works
A Jabber domain is served by one or more ejabberd nodes. These nodes can
run on different machines connected via a network. They all must be able to
connect to port 4369 of all other nodes, and must have the same
magic cookie (see the Erlang/OTP documentation; in short, the file
ejabberd/.erlang.cookie must be the same on all nodes). This is
needed because all nodes exchange information about connected users, S2S
connections, registered services, etc.
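As a sketch of the cookie requirement (the machine names here are illustrative), a two-node setup could share the magic cookie like this:

```shell
# Copy the magic cookie so both nodes share it:
scp first.example.org:ejabberd/.erlang.cookie second.example.org:ejabberd/
# On each machine, start a node as in section 2.4; the full Erlang node
# names (ejabberd@first..., ejabberd@second...) will differ per host:
erl -name ejabberd -s ejabberd
```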
Each ejabberd node runs the following modules:
- router;
- local router;
- session manager;
- S2S manager.
4.1.1 Router
This module is the main router of Jabber packets on each node. It routes
them based on their destination domains. It has two tables: local and global
routes. First, the destination domain of a packet is looked up in the local
table; if it is found, the packet is routed to the appropriate process. If
not, it is looked up in the global table and routed to the appropriate
ejabberd node or process. If it exists in neither table, the packet is sent
to the S2S manager.
4.1.2 Local Router
This module routes packets whose destination domain equals the server name.
If the destination JID has a node part, the packet is routed to the session
manager; otherwise it is processed depending on its content.
4.1.3 Session Manager
This module routes packets to local users. It looks up in the presence table
which user resource the packet must be sent to. If this resource is connected
to this node, the packet is routed to the C2S process; if it is connected via
another node, the packet is sent to the session manager on that node.
4.1.4 S2S Manager
This module routes packets to other Jabber servers. First, it checks whether
an S2S connection from the source domain to the destination domain of the
packet is already open. If it is open on another node, the packet is routed
to the S2S manager on that node; if it is open on this node, it is routed to
the process serving that connection; and if no such connection exists, it is
opened and registered.
5 Built-in Modules
5.1 Common Options
The following options are used by many modules, so they are described in a separate section.
5.1.1 Option iqdisc
Many modules define handlers for processing IQ queries of different namespaces
addressed to the server or to a user (e.g. to myjabber.org or to
user@myjabber.org). This option defines the processing discipline for
these queries. Possible values are:
- no_queue: All queries of a namespace with this processing
discipline are processed immediately. This also means that no other packets
can be processed until processing finishes. Hence this discipline is not
recommended if processing a query can take a relatively long time.
- one_queue: In this case a separate queue is created for processing
IQ queries of a namespace with this discipline, and this queue is processed
in parallel with other packets. This discipline is recommended in most
cases.
- parallel: In this case a separate Erlang process is spawned for
each packet of a namespace with this discipline, so all these packets are
processed in parallel. Although spawning an Erlang process has a relatively
low cost, this can disrupt normal server operation, because Erlang has a
limit of 32000 processes.
Example:
{modules, [
...
{mod_time, [{iqdisc, no_queue}]},
...
]}.
5.1.2 Option host
Some modules may act as services and want to have a different domain name.
This option explicitly defines that name.
Example:
{modules, [
...
{mod_echo, [{host, "echo.myjabber.org"}]},
...
]}.
5.2 mod_register
5.3 mod_roster
5.4 mod_configure
5.5 mod_disco
5.6 mod_stats
This module adds support for
JEP-0039 (Statistics Gathering).
Options:
- iqdisc: processing discipline for
http://jabber.org/protocol/stats IQ queries.
TBD about access.
5.7 mod_vcard
5.8 mod_offline
5.9 mod_echo
5.10 mod_private
This module adds support for
JEP-0049 (Private XML
Storage).
Options:
- iqdisc: processing discipline for jabber:iq:private IQ queries.
5.11 mod_time
This module replies with the UTC time to jabber:iq:time queries.
Options:
- iqdisc: processing discipline for jabber:iq:time IQ queries.
5.12 mod_version
This module replies with the ejabberd version to jabber:iq:version queries.
Options:
- iqdisc: processing discipline for jabber:iq:version IQ queries.
This document was translated from LATEX by
HEVEA.