From f327b06ed752197368839cfef3c111cd2b7244dc Mon Sep 17 00:00:00 2001 From: Alexey Shchepin Date: Wed, 6 Oct 2004 15:07:21 +0000 Subject: [PATCH] * doc/guide.tex: Updated SVN Revision: 274 --- ChangeLog | 2 + doc/guide.html | 206 +++++++++++++++++++++++++++++++++---------------- doc/guide.tex | 120 +++++++++++++++++++++++----- 3 files changed, 241 insertions(+), 87 deletions(-) diff --git a/ChangeLog b/ChangeLog index a004a93bc..3005a91a2 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,5 +1,7 @@ 2004-10-06 Alexey Shchepin + * doc/guide.tex: Updated + * src/ejabberd_s2s_out.erl: Fixed socket closing condition 2004-10-05 Alexey Shchepin diff --git a/doc/guide.html b/doc/guide.html index 7cfa86468..70b0ad376 100644 --- a/doc/guide.html +++ b/doc/guide.html @@ -77,34 +77,35 @@
  • 4.1.3  Session Manager
  • 4.1.4  S2S Manager +
• 4.2  How to set up an ejabberd cluster -
  • A  Built-in Modules +
  • A  Built-in Modules -
  • B  I18n/L10n +
  • B  I18n/L10n @@ -723,50 +724,119 @@ router;

    4.1.1  Router

    -This module is the main router of Jabber packets on each node. It routes -them based on their destinations domains. It has two tables: local and global -routes. First, domain of packet destination searched in local table, and if it -found, then the packet is routed to appropriate process. If no, then it -searches in global table, and is routed to the appropriate ejabberd node or -process. If it does not exists in either tables, then it sent to the S2S -manager.
+This module is the main router of Jabber packets on each node. It
+routes them based on their destination domains. It uses a global
+routing table: the destination domain of a packet is looked up in this
+table, and if it is found, the packet is routed to the appropriate
+process. If not, the packet is sent to the S2S manager.
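+
+A rough Erlang sketch of this decision (the table and function names
+here are hypothetical, not the actual ejabberd source):
+
+%% Hypothetical sketch of the router decision described above.
+%% Assumes a mnesia table `route' mapping a domain to a handler pid.
+-record(route, {domain, pid}).
+
+route(From, To = {_User, Domain, _Resource}, Packet) ->
+    case mnesia:dirty_read(route, Domain) of
+        [#route{pid = Pid}] ->
+            %% Some process has registered this domain: hand the packet over.
+            Pid ! {route, From, To, Packet};
+        [] ->
+            %% Unknown domain: let the S2S manager deal with it.
+            ejabberd_s2s ! {route, From, To, Packet}
+    end.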

    4.1.2  Local Router

    -This module routes packets which have a destination domain equal to this server -name. If destination JID has a non-empty user part, then it routed to the -session manager, else it is processed depending on it's content.
+This module routes packets which have a destination domain equal to
+this server name. If the destination JID has a non-empty user part,
+the packet is routed to the session manager; otherwise it is processed
+depending on its content.
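+
+Sketched in Erlang (the function names are hypothetical, not the
+actual module):
+
+%% Packets addressed to the server's own domain: a bare domain JID
+%% is handled by content, a user JID goes to the session manager.
+route_local(From, To = {User, _Server, _Resource}, Packet) ->
+    case User of
+        "" -> handle_by_content(Packet);               % e.g. server iq queries
+        _  -> ejabberd_sm ! {route, From, To, Packet}  % deliver to a local user
+    end.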

    4.1.3  Session Manager

    -This module routes packets to local users. It searches for what user resource -packet must be sended via presence table. If this resource is connected to -this node, it is routed to C2S process, if it connected via another node, then -the packet is sent to session manager on that node.
+This module routes packets to local users. It uses a presence table
+to find out to which user resource a packet must be sent. The packet
+is then either routed to the appropriate C2S process, stored in
+offline storage, or bounced back.
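+
+A minimal sketch of this delivery decision (presence_lookup,
+store_offline and bounce are hypothetical placeholders, not the real
+ejabberd API):
+
+%% Deliver a packet to a local user as described above.
+deliver(From, To, Packet) ->
+    case presence_lookup(To) of
+        {ok, C2SPid} -> C2SPid ! {route, From, To, Packet}; % resource online
+        offline      -> store_offline(From, To, Packet);    % keep for later delivery
+        error        -> bounce(From, To, Packet)            % e.g. unknown user
+    end.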

    4.1.4  S2S Manager

    -This module routes packets to other Jabber servers. First, it checks if an -open S2S connection from the domain of the packet source to the domain of -packet destination already exists. If it is open on another node, then it -routes the packet to S2S manager on that node, if it is open on this node, then -it is routed to the process that serves this connection, and if a connection -does not exist, then it is opened and registered.
+This module routes packets to other Jabber servers. First, it
+checks whether an open S2S connection from the domain of the packet
+source to the domain of the packet destination already exists. If it
+does, the S2S manager routes the packet to the process serving this
+connection; otherwise a new connection is opened.
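+
+A minimal sketch, assuming (hypothetically) that open connections are
+kept in an ets table keyed by the pair of domains:
+
+%% Reuse an open S2S connection if one exists, else open a new one.
+route_s2s(FromDomain, ToDomain, Packet) ->
+    Key = {FromDomain, ToDomain},
+    case ets:lookup(s2s_connections, Key) of
+        [{Key, Pid}] ->
+            Pid ! {send_packet, Packet};                % connection already open
+        [] ->
+            NewPid = open_connection(Key),              % placeholder: open and register
+            ets:insert(s2s_connections, {Key, NewPid}),
+            NewPid ! {send_packet, Packet}
+    end.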
    +
    + + +

4.2  How to set up an ejabberd cluster

+
+
+Suppose you have already set up ejabberd on one machine (first), and
+you need to set up another one to form an ejabberd cluster. Then
+follow these steps:
1. +Copy the ~ejabberd/.erlang.cookie file from first to
+    second.
      +
+(alt) You can also add the ``-cookie content_of_.erlang.cookie''
+    option to all ``erl'' commands below.
      +
      +
2. On second, run the following command under the `ejabberd' user,
+    in the directory where ejabberd will work later:
      +erl -sname ejabberd \
      +    -mnesia extra_db_nodes "['ejabberd@first']" \
      +    -s mnesia
      +
This will start mnesia serving the same database as ejabberd@first.
+    You can check this by running the ``mnesia:info().'' command. You
+    should see many remote tables and a line like the following:
      +running db nodes   = [ejabberd@first, ejabberd@second]
      +
      + +
    3. Now run the following in the same ``erl'' session: +
      +mnesia:change_table_copy_type(schema, node(), disc_copies).
      +
+ This will create local disc storage for the database.
      +
+(alt) Change the storage type of the `schema' table to ``RAM and disc
+    copy'' on the second node via the web interface.
      +
      +
4. Now you can add replicas of various tables to this node with
+    ``mnesia:add_table_copy'' or
+    ``mnesia:change_table_copy_type'' as above (just replace
+    ``schema'' with the name of another table; ``disc_copies''
+    can be replaced with ``ram_copies'' or
+    ``disc_only_copies''). An example is sketched after this list.
      +
+Which tables to replicate depends very much on your needs; you can get
+    some hints from the ``mnesia:info().'' command, by looking at the
+    size of the tables and the default storage type of each table on first.
      +
+Replicating a table makes lookups in this table faster on this node,
+    but writes will be slower. And of course, if the machine with one of
+    the replicas is down, the other replicas will be used.
      +
+Section ``5.3 Table Fragmentation''
+    here
+    can also be useful.
      +
+(alt) Same as in the previous item, but for other tables.
      +
      +
5. Run ``init:stop().'' or just ``q().'' to exit the Erlang
+    shell. This can take some time if mnesia has not yet transferred
+    and processed all the data it needs from first.
      +
      +
6. Now run ejabberd on second with almost the same config as
+    on first (you probably don't need to duplicate the ``acl''
+    and ``access'' options --- they will be taken from
+    first, and mod_muc and mod_irc should be
+    enabled on only one machine in the cluster).
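+
+For example (the table names are only an illustration; replicate
+whichever tables your setup actually needs), step 4 could look like
+this in the same erl session:
+
+mnesia:add_table_copy(roster, node(), disc_copies).
+mnesia:add_table_copy(offline_msg, node(), disc_only_copies).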
+You can repeat these steps for any other machines that are supposed to
+serve this domain.

    -

    A  Built-in Modules

    +

    A  Built-in Modules

    -

    A.1  Common Options

    +

    A.1  Common Options

    The following options are used by many modules, so they are described in @@ -774,7 +844,7 @@ separate section.

    -

    A.1.1  Option iqdisc

    +

    A.1.1  Option iqdisc

    Many modules define handlers for processing IQ queries of different namespaces @@ -807,7 +877,7 @@ Example: -

    A.1.2  Option host

    +

    A.1.2  Option host

This option explicitly defines the hostname for the module, which acts as a service.
    @@ -823,7 +893,7 @@ Example: -

    A.2  mod_announce

    +

    A.2  mod_announce

    This module adds support for broadcast announce messages and MOTD. @@ -866,7 +936,7 @@ Example: -

    A.3  mod_configure

    +

    A.3  mod_configure

    Options: @@ -876,7 +946,7 @@ discipline (see A.1.1). -

    A.4  mod_disco

    +

    A.4  mod_disco

    This module adds support for JEP-0030 (Service Discovery).
    @@ -901,7 +971,7 @@ Example: -

    A.5  mod_echo

    +

    A.5  mod_echo

    This module acts as a service and simply returns to sender any Jabber packet. Module may be @@ -915,7 +985,7 @@ then prefix echo. is added to main ejabberd hostname. -

    A.6  mod_irc

    +

    A.6  mod_irc

    This module implements IRC transport.
    @@ -938,7 +1008,7 @@ Example: -

    A.7  mod_last

    +

    A.7  mod_last

    This module adds support for JEP-0012 (Last Activity)
    @@ -950,7 +1020,7 @@ discipline (see A.1.1). -

    A.8  mod_muc

    +

    A.8  mod_muc

    This module implements JEP-0045 (Multi-User Chat) service.
    @@ -985,14 +1055,14 @@ Example: -

    A.9  mod_offline

    +

    A.9  mod_offline

    This module implements offline message storage.

    -

    A.10  mod_privacy

    +

    A.10  mod_privacy

    This module implements Privacy Rules as defined in XMPP IM @@ -1005,7 +1075,7 @@ discipline (see A.1.1). -

    A.11  mod_private

    +

    A.11  mod_private

    This module adds support of JEP-0049 (Private XML Storage).
    @@ -1017,7 +1087,7 @@ discipline (see A.1.1). -

    A.12  mod_pubsub

    +

    A.12  mod_pubsub

    This module implements JEP-0060 (Publish-Subscribe Service).
    @@ -1042,7 +1112,7 @@ Example: -

    A.13  mod_register

    +

    A.13  mod_register

    This module adds support for JEP-0077 (In-Band Registration).
    @@ -1075,7 +1145,7 @@ Example: -

    A.14  mod_roster

    +

    A.14  mod_roster

    This module implements roster management.
    @@ -1087,7 +1157,7 @@ discipline (see A.1.1). -

    A.15  mod_service_log

    +

    A.15  mod_service_log

    This module adds support for logging of user packets via any jabber service. @@ -1110,7 +1180,7 @@ Example: -

    A.16  mod_stats

    +

    A.16  mod_stats

    This module adds support for JEP-0039 (Statistics Gathering).
    @@ -1124,7 +1194,7 @@ TBD about access.

    -

    A.17  mod_time

    +

    A.17  mod_time

    This module answers UTC time on jabber:iq:time queries.
    @@ -1136,7 +1206,7 @@ discipline (see A.1.1). -

    A.18  mod_vcard

    +

    A.18  mod_vcard

    This module implements simple Jabber User Directory (based on user vCards) @@ -1152,19 +1222,21 @@ discipline (see A.1.1).
    search
Specifies whether search is enabled (value is true, default) or disabled (value is false) by the service. If search is set to false, option host is ignored and the service does not appear in Jabber Discovery items. +
    matches
Limits the number of reported search results. If the value is set to
+infinity, then all search results are reported. The default value is 30.
Example:
       {modules,
        [
         ...
    -    {mod_vcard, [{search, false}]}
    +    {mod_vcard, [{search, false}, {matches, 20}]}
         ...
        ]}.
     
    -

    A.19  mod_version

    +

    A.19  mod_version

    This module answers ejabberd version on jabber:iq:version queries.
    @@ -1176,7 +1248,7 @@ discipline (see A.1.1). -

    B  I18n/L10n

    +

    B  I18n/L10n

All built-in modules support xml:lang attribute inside IQ queries.
diff --git a/doc/guide.tex b/doc/guide.tex
index f656d9323..dd2775f41 100644
--- a/doc/guide.tex
+++ b/doc/guide.tex
@@ -721,38 +721,118 @@ Each \ejabberd{} node have following modules:
 
 \subsubsection{Router}
 
-This module is the main router of \Jabber{} packets on each node. It routes
-them based on their destinations domains. It has two tables: local and global
-routes. First, domain of packet destination searched in local table, and if it
-found, then the packet is routed to appropriate process. If no, then it
-searches in global table, and is routed to the appropriate \ejabberd{} node or
-process. If it does not exists in either tables, then it sent to the S2S
-manager.
+This module is the main router of \Jabber{} packets on each node. It
+routes them based on their destination domains. It uses a global
+routing table: the destination domain of a packet is looked up in this
+table, and if it is found, the packet is routed to the appropriate
+process. If not, the packet is sent to the S2S manager.
 
 \subsubsection{Local Router}
 
-This module routes packets which have a destination domain equal to this server
-name. If destination JID has a non-empty user part, then it routed to the
-session manager, else it is processed depending on it's content.
+This module routes packets which have a destination domain equal to
+this server name. If the destination JID has a non-empty user part,
+the packet is routed to the session manager; otherwise it is processed
+depending on its content.
 
 \subsubsection{Session Manager}
 
-This module routes packets to local users. It searches for what user resource
-packet must be sended via presence table. If this resource is connected to
-this node, it is routed to C2S process, if it connected via another node, then
-the packet is sent to session manager on that node.
+This module routes packets to local users. It uses a presence table
+to find out to which user resource a packet must be sent. The packet
+is then either routed to the appropriate C2S process, stored in
+offline storage, or bounced back.
 
 \subsubsection{S2S Manager}
 
-This module routes packets to other \Jabber{} servers. First, it checks if an
-open S2S connection from the domain of the packet source to the domain of
-packet destination already exists. If it is open on another node, then it
-routes the packet to S2S manager on that node, if it is open on this node, then
-it is routed to the process that serves this connection, and if a connection
-does not exist, then it is opened and registered.
+This module routes packets to other \Jabber{} servers. First, it
+checks whether an open S2S connection from the domain of the packet
+source to the domain of the packet destination already exists. If it
+does, the S2S manager routes the packet to the process serving this
+connection; otherwise a new connection is opened.
+
+
+\subsection{How to set up an ejabberd cluster}
+\label{sec:cluster}
+
+Suppose you have already set up ejabberd on one machine (\term{first}),
+and you need to set up another one to form an \ejabberd{} cluster. Then
+follow these steps:
+
+\begin{enumerate}
+\item Copy the \verb|~ejabberd/.erlang.cookie| file from \term{first} to
+  \term{second}.
+
+  (alt) You can also add the ``\verb|-cookie content_of_.erlang.cookie|''
+  option to all ``\shell{erl}'' commands below.
+
+\item On \term{second}, run the following command under the
+  `\term{ejabberd}' user, in the directory where ejabberd will work
+  later:
+
+\begin{verbatim}
+erl -sname ejabberd \
+    -mnesia extra_db_nodes "['ejabberd@first']" \
+    -s mnesia
+\end{verbatim}
+
+  This will start mnesia serving the same database as
+  \node{ejabberd@first}. You can check this by running the
+  ``\verb|mnesia:info().|'' command. You should see many remote tables
+  and a line like the following:
+
+\begin{verbatim}
+running db nodes = [ejabberd@first, ejabberd@second]
+\end{verbatim}
+
+
+\item Now run the following in the same ``\shell{erl}'' session:
+
+\begin{verbatim}
+mnesia:change_table_copy_type(schema, node(), disc_copies).
+\end{verbatim}
+
+  This will create local disc storage for the database.
+
+  (alt) Change the storage type of the `\term{schema}' table to ``RAM
+  and disc copy'' on the second node via the web interface.
+
+
+\item Now you can add replicas of various tables to this node with
+  ``\verb|mnesia:add_table_copy|'' or
+  ``\verb|mnesia:change_table_copy_type|'' as above (just replace
+  ``\verb|schema|'' with the name of another table; ``\verb|disc_copies|''
+  can be replaced with ``\verb|ram_copies|'' or
+  ``\verb|disc_only_copies|'').
+
+  Which tables to replicate depends very much on your needs; you can
+  get some hints from the ``\verb|mnesia:info().|'' command, by looking
+  at the size of the tables and the default storage type of each table
+  on \term{first}.
+
+  Replicating a table makes lookups in this table faster on this node,
+  but writes will be slower. And of course, if the machine with one of
+  the replicas is down, the other replicas will be used.
+
+  Section ``5.3 Table Fragmentation''
+  \footahref{http://erlang.org/doc/r9c/lib/mnesia-4.1.4/doc/html/part_frame.html}{here}
+  can also be useful.
+
+  (alt) Same as in the previous item, but for other tables.
+
+
+\item Run ``\verb|init:stop().|'' or just ``\verb|q().|'' to exit the
+  Erlang shell. This can take some time if mnesia has not yet
+  transferred and processed all the data it needs from \term{first}.
+
+
+\item Now run ejabberd on \term{second} with almost the same config as
+  on \term{first} (you probably don't need to duplicate the ``\verb|acl|''
+  and ``\verb|access|'' options --- they will be taken from
+  \term{first}, and \verb|mod_muc| and \verb|mod_irc| should be
+  enabled on only one machine in the cluster).
+\end{enumerate}
+
+You can repeat these steps for any other machines that are supposed to
+serve this domain.
 
 \appendix{}