C-spark v1.92.0.0

Name

C-spark -- User Management UI

Description

C-Spark is a tool for the mass registration of users in the system user database. It computes the differences between a given list and the user database and adds or deletes users accordingly, which removes the need to add each user manually. It also supports operations on single users.

This application may come in handy in companies, schools, colleges and similar institutions where there are many people to manage and adding them by hand would otherwise take a long time.

Finding differences is group-based: select a group to match the list against. This allows for multiple groups and lists.

Currently, C-Spark has an "autorun" (non-interactive) interface and a CLI (simplistic interactive) interface. In the future I hope to add a CUI (graphical interactive) interface.

Program calling syntax

Due to its so-called "autorun" mode, there are a lot of options. They are all explained below; the synopsis shows only the most important ones.

cspark [-M backend] [-P] [-S] [-U] [-c config]
-M backend Uses the specified backend rather than the default one listed in the SparkUI configuration file.
-P Start the password list printing process. Only --P[iost] options preceding a -P apply to that particular -P, since multiple -P operations can be specified.

--Ph Show all available styles. Styles which require a template will have an asterisk shown next to them.
--Pi file Take the given log file (as generated by -S) as input
--Po file Write the beautified output to the given file
--Ps style Use the given style (and thus, file format) for the output file
--Pt template If required by the style, use the given file as the template for this style.
-S Start the synchronization process between the system user database and the given list. Only --S[gio] options preceding a -S apply to that particular -S.
--Sg group Specify the system group to compare against within the next synchronization
--Si infile The XML Data Source file to read from
--So outfile The log file to which the actions taken (added, deleted) shall be written.
-U Add a single user using the parameters given with --U[bgnvx], which must precede the particular -U.
--Ub bday Scrambles the birthdate so it can be used as the XUID. The scrambling is "cryptographically weak" and is a simple measure to stop curious users. The format of bday can be D.M.Y, M/D/Y or Y-M-D. Two- and three-digit years are accepted and handled accordingly.
--Ug group The system group the user should be added to
--Un lastname The last name (Nachname) of this user
--Uv firstname The first name (Vorname) of this user
--Ux xuid Provide an XUID (External Unique Identification Number within the Data Source). Do not mix with --Ub!
-c config Loads the given configuration file (on top of the hardcoded defaults, /etc/spark.conf and cui/../vetc/spark.conf). See the Configuration file section for details.

Here are a few examples of autorun mode usage. If that is too complex, use the interactive mode instead by providing no autorun arguments or parameters.

# cspark --Si staff.xml --So today.log --Sg staff -S
# cspark --Pi today.log --Po today.rtf --Ps sg_rtf --Pt supply/sg.rtf -P
# cspark --Uv Jan --Un Engelhardt --Ug staff --Ub 22.05.1986 -U
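
The first command synchronizes the Data Source staff.xml against the system group staff and writes the actions taken to today.log. The second command beautifies that log into today.rtf using the sg_rtf style with the template supply/sg.rtf. The third command adds a single user, Jan Engelhardt, to the group staff, deriving the XUID from the scrambled birthdate.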

Configuration file

Configuration files use a textual key=value style (described in the ACCDB API documentation). There are a lot of configuration options; all of them are discussed in the following sections. The spark.cfg from CVS contains what I experimented with, so where "default" is mentioned, the hardcoded value and/or the spark.cfg from official file package releases is meant.
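
Purely as an illustration of the format (the individual keys are described in the sections below), a configuration file excerpt could look like this:

SHELL=/bin/bash
HOME=/home
AUTOFLUSH=default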

AUTOFLUSH option

This is a special option for backend module usage. If AUTOFLUSH is on for a backend module, the module flushes dirty data to disk (or instructs its parent to do so).

This flushing only becomes necessary if other applications read the user database behind Vitalnix ACCDB, because a newly created user might exist only within memory and the scope of ACCDB until the DB is flushed to disk. (SLANED solves all this...) Possible values are:

off Explicitly disables ACCDB's AUTOFLUSH option. This is the default, as the time needed to run through a user list is pretty small (at least in my test environment).
on Explicitly enables ACCDB's AUTOFLUSH option. (Might result in performance loss.)
default Does not touch ACCDB's AUTOFLUSH in any way. The default autoflush value depends on the particular backend module.
postadd About the same as off, except that a flush request is performed before each USER_POSTADD script is run. Use this if you have a postadd script that deals with login names in a way that would not work otherwise.
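
For example, to keep flushing disabled during the bulk run but ensure that USER_POSTADD scripts already see the new user on disk, one would set:

AUTOFLUSH=postadd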

User management helper programs

MASTER_{PRE,POST}{ADD,DEL} specify scripts that are to be executed { before, after } the { addition, deletion } process. The available positional parameters are listed below.

USER_{PRE,POST}{ADD,DEL} specify scripts that are to be executed { before, after } a user is { added, deleted }. Some of these might not be called if there are no users to { add, delete }. The available positional parameters are listed below; an example hook script is sketched after the table.

MASTER_PREADD %1$u The number of users about to be added
MASTER_POSTADD %1$u The number of users that were actually added
MASTER_PREDEL %1$u The number of users about to be deleted
  %2$s "Current" (= start of deletion function import_ds_Delete()) date and time ("YYYYMMDD-HHMMSS" format)
MASTER_POSTDEL %1$u The number of users that were actually deleted
  %2$s "Current" date and time ("YYYYMMDD-HHMMSS" format)
USER_PREADD %1$s Login name
  %2$ld UID of the user (may be -1 to indicate automatic UID selection (which has not been done yet!))
  %3$ld GID of the user's primary group
  %4$s Name of the user's primary group
  %5$s Names of the user's supplementary groups, separated by comma. (May be empty)
  %6$s GECOS field
  %7$s Home directory
  %8$s Default shell
USER_POSTADD   Same fields as for USER_PREADD
USER_PREDEL %1$s Login name
  %2$ld UID of the user
  %3$ld GID of the user's primary group
  %4$s Name of the user's primary group
  %5$s Home directory
  %6$s "Current" date and time ("YYYYMMDD-HHMMSS" format)
USER_POSTDEL %1$s Login name
  %2$s "Current" date and time ("YYYYMMDD-HHMMSS" format)

Programs

SHELL points to the default command interpreter used when the user logs in. The default is /bin/bash.

Password related options

Three options related to password creation and storage are available: PSWD_LEN, PSWD_PHON and PSWD_METH.

PSWD_LEN controls the length of newly generated passwords. Use -1 to disable logins on newly generated accounts by assigning a password that never matches. Use 0 to start with no password. The latter may not always work, as remote services may disallow empty passwords; see their source or their respective PAM configuration file.

If PSWD_PHON is set to yes (or on), special subroutines are used to generate pronounceable, easy-to-remember (and nonetheless secure) passwords. If it is set to no, the standard algorithm of completely random characters and digits is used.

PSWD_METH specifies the encryption method to use. This can be des, md5 or blowfish (the last being the most secure). If you use "samba encrypted passwords", you will need ntlm, which is very weak and currently not implemented.
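
Putting these together, a configuration asking for pronounceable passwords hashed with blowfish might contain (the length of 8 is only an illustration):

PSWD_LEN=8
PSWD_PHON=yes
PSWD_METH=blowfish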

Directory options

HOME points to the path under which new home directories shall be created. In most cases, this will be /home.

SPLIT_LVL controls how home directories are arranged. If you have a large number of users, it may be advisable to split them up, since listing one big directory (/home) containing some thousand entries takes some time. The solution is to move them into subdirectories, e.g. to create /home/j/jengelh. This speeds up single lookups, and searching all of /home with find utilities does not get any slower. The maximum value for SPLIT_LVL is 2 (which would create /home/j/je/jengelh). The default is 0, meaning the feature is not used.

SKEL points to the skeleton directory. All files from the skeleton directory are copied into the user's home directory upon user creation. The default is /var/lib/empty. (If you want all the dot.crap files, change it to /etc/skel.)
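
For example, to place home directories under /home, split one level deep, and populate them from /etc/skel:

HOME=/home
SPLIT_LVL=1
SKEL=/etc/skel

With these settings, a new user jengelh would get the home directory /home/j/jengelh, prepopulated with the contents of /etc/skel.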

Finding differences

Involving the diff utility might seem like a good idea, but diff is a general-purpose differencer that compares line by line. Since we only need to compare fields, and conditionally depending on their content, using diff would make the whole program a lot more complicated.

The first step is to take an input file and compare it against the current user database. Users found in both repositories are kept, those only present in the Data Source have to be added, and the rest are deleted. (They can be archived instead if requested.)

Input data format

Data fed to C-Spark is in XML format. See the tag specification for details, or check out the data source examples in the doc/examples folder.


June 06 2004 http://vitalnix.sf.net/