Understanding RCS
From the viewpoint of 2024, RCS is a somewhat archaic tool for tracking files. Newer systems such as CVS, Subversion, and git have made it seem obsolete. However, for managing a single file here and there, it is not inherently bad, though it has some eccentricities.
Unlike git, RCS tracks each file individually, wherever it lives in the filesystem, so the current working directory does not matter. For example, the following works with RCS:
[root@a2a4a2b8b06f ~]# cd /etc
[root@a2a4a2b8b06f etc]# echo "Test" > test
[root@a2a4a2b8b06f etc]# ci -l test
test,v <-- test
enter description, terminated with single '.' or end of file:
NOTE: This is NOT the log message!
>> Test
>> .
initial revision: 1.1
done
[root@a2a4a2b8b06f etc]# rm test
rm: remove regular file 'test'? y
[root@a2a4a2b8b06f etc]# cd /root/
[root@a2a4a2b8b06f ~]# co -l /etc/test
/etc/test,v --> /etc/test
revision 1.1 (locked)
done
[root@a2a4a2b8b06f ~]# cat /etc/test
Test
[root@a2a4a2b8b06f ~]#
However, a similar attempt to manage files with git requires us to initialize the entire system with git. Putting a git repository in the root directory creates conflicts with any other git repositories on the system, and leaves open the possibility that someone will accidentally try to back up the entire filesystem with git, which would of course cause space issues. The usual answer to this problem is a tool such as etckeeper, which is designed to track the entire /etc directory safely. The problem with placing only /etc under revision control is that configuration files sometimes exist in other locations, particularly if one is running some commercial application on a server.
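As a rough sketch of the etckeeper side (assuming etckeeper is installed with git as its backing store, and with an example commit message), setting it up amounts to something like:
etckeeper init
etckeeper commit "Initial import of /etc"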
It is because of these features that RCS is still useful. It is actually possible to use RCS in combination with etckeeper: RCS tracks the specific files that have been changed, while etckeeper tracks the whole /etc directory.
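As a sketch of that combined approach (the application path and commit message are only illustrative examples), a change might be recorded like this:
ci -l /opt/example-app/conf/app.conf                      # RCS history for a file outside /etc
etckeeper commit "Adjusted example-app settings in /etc"  # git history for everything under /etc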
The primary challenge with RCS is its locking behaviour. By default, if you run ci file, it will actually remove the working file after checking it in. If you instead check the file in unlocked using ci -u file, it will leave the working copy read-only. This prevents further changes until someone runs co -l file. This can be viewed in a positive or a negative light. Positively, it reminds administrators that they should use revision control. Negatively, it can interfere with fixing problems, and it is undesirable if one is using automation to manage the file and simply using RCS as a backup mechanism.
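To summarize the modes discussed so far (file here is just a placeholder name):
ci file      # check in and remove the working file
ci -u file   # check in and leave a read-only working copy
co -l file   # check out a locked, writable working copy again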
The solution to the above dilemma is to always use ci -l file. Locking the file you have just checked in is counter-intuitive, but the whole idea of locking and unlocking files comes from an era of multiple users, each with their own account, managing files in a shared directory. RCS was developed in the 1980s, when it was not uncommon for a program to consist of a single file compiled directly with a compiler, or for many users to manage files under their own names in a directory. In that context, it was useful to place files under revision control and have the working copy come and go with ci and co. That paradigm is totally unsuitable for modern Linux systems, in which configuration files are deployed and sometimes updated by a package manager, owned by root, and ideally managed by some configuration management tool. The locks are meaningless if everyone is using sudo, and the configuration file must be present for the system to work. For that reason, using ci -l for any manual changes is the better practice for files managed by root. If you have regular users with access to some configuration files, unlocking files may still be useful.
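In practice, a manual change to a root-owned file under this approach looks roughly like the following; the file name is only an example, and the first step is needed only if the file was last checked in unlocked:
co -l /etc/motd      # take the lock and get a writable working copy
vi /etc/motd
rcsdiff /etc/motd    # review the pending change against the last revision
ci -l /etc/motd      # prompts for a log message and keeps the working file writable
rlog /etc/motd       # inspect the revision history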