New machine setup
Revision as of 08:23, 14 June 2018
Setting up a new Software Heritage desktop machine
Debian install
- Stable
- root w/temporary password; no regular user (after setting up root password, cancel twice and jump forward to clock settings)
- full disk with LVM; reduce home LV to leave half of the disk free
- Standard system utilities, ssh server, no desktop environment (puppet will install that)
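The home LV reduction can be sketched as follows (a sketch only: the volume group and LV names are assumptions from the default Debian LVM layout; pick the target size so half of the disk stays free):

```shell
# List logical volumes to find the home LV (names vary per install)
lvs
# Shrink the home LV; -r also resizes the contained filesystem
lvreduce -r -L <target-size> /dev/<vg-name>/home
# Verify the free space now available in the volume group
vgs
```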
Base system setup (from console)
- Login as root
- Enable password root access in ssh (/etc/ssh/sshd_config, PermitRootLogin yes)
- Write down IP configuration and add the machine to the Gandi DNS
- Test SSH login as root from your workstation
- Stay at your desk :)
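The console steps above can be sketched as a short session (a sketch; the sed pattern assumes the stock Debian sshd_config where PermitRootLogin ships commented out):

```shell
# Temporarily allow root login over ssh (puppet will take over later)
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart ssh
# Note the IP configuration, to be declared in the Gandi DNS
ip -4 addr show
```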
Full system setup (from your desk)
- SSH login as root
- Edit sources.list to add testing
- apt-get update, dist-upgrade, autoremove --purge
- While you wait, create the VPN certificates for the new machine (see the Vpn page)
- add the machine to the puppet configuration, in the swh_desktop role
- apt-get install puppet openvpn
- configure openvpn as described on the Vpn page
- add pergamon IP address to /etc/resolv.conf
- add louvre.softwareheritage.org to /etc/hosts
- configure puppet
- systemctl disable puppet
- server=pergamon.internal.softwareheritage.org in /etc/puppet/puppet.conf
- puppet agent --enable
- puppet agent -t
- run puppet on pergamon to update munin server config
- set proper root password, add it to password store
- reboot
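The package and puppet steps above can be sketched as one shell session (a sketch: the IP placeholders must be filled in, and `puppet config set` is used here as an equivalent to editing /etc/puppet/puppet.conf by hand):

```shell
# Add testing to the apt sources and upgrade
echo 'deb http://deb.debian.org/debian testing main' >> /etc/apt/sources.list
apt-get update && apt-get dist-upgrade && apt-get autoremove --purge

apt-get install puppet openvpn

# Name resolution before the vpn is fully set up (addresses are placeholders)
echo 'nameserver <pergamon-ip>' >> /etc/resolv.conf
echo '<louvre-ip> louvre.softwareheritage.org' >> /etc/hosts

# Puppet: runs on demand only, against pergamon
systemctl disable puppet
puppet config set --section agent server pergamon.internal.softwareheritage.org
puppet agent --enable
puppet agent -t
```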
Setting up a new Virtual Machine (semi-manual process)
As a prerequisite, clone the sysadm-provisioning repository.
Naming scheme
<machine-name>.(<zone>.:<hoster>).internal.softwareheritage.org.
The parentheses denote the optional part of the scheme.
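For instance (hypothetical example names, not necessarily existing machines): a worker hosted at azure could be named worker01.euwest.azure.internal.softwareheritage.org, while a machine on the base internal infrastructure, with no zone/hoster part, would simply be <machine-name>.internal.softwareheritage.org.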
Modus operandi
The modus operandi is as follows:
- Install the requisite dependencies for the target infrastructure (they differ per infra/cloud provider)
- provision a vm from the dedicated infrastructure cloud provider
- bootstrap the puppet package dependencies on that vm
- run puppet agent on that vm
- run puppet agent on the dns node
Example with azure
First, install Azure's requirements.
cd /path/to/sysadm-provisioning

# Historic implementation detail (really use the following user):
# using this user (uid 1000) simplifies the steps down the line
ADMIN_USER=zack

# Create the vm 'worker01' with type 'worker'
# (other possible types: db, replica, <whatever>)
./azure/create-vm.sh worker01 worker

# Retrieve the ip of the new vm, then copy the provision-vm.sh script
# to run there (this does, as mentioned earlier, the puppet bootstrap + run)
scp ./azure/provision-vm.sh $ADMIN_USER@<ip>:/tmp
ssh $ADMIN_USER@<ip> chmod +x /tmp/provision-vm.sh
ssh $ADMIN_USER@<ip> /tmp/provision-vm.sh public

# Note that you could also connect to the node, install tmux, run a
# tmux session, then trigger the script from within
After this, run the puppet agent on the dns server:
ssh <your-user>@pergamon.internal.softwareheritage.org sudo puppet agent --test
As always, the truth lies in the source code, with the details explained in its comments.
Troubleshoot
Recreating machine with the same exact configuration
It sometimes happens that a machine is scratched and recreated with the exact same configuration. In that case, the old certificate (keyed on the machine's fqdn) must first be cleaned up on the puppet master:
puppet cert clean <fqdn>
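On more recent puppet versions (6 and later), the `puppet cert` subcommand was removed; the equivalent cleanup on the puppet master is:

```shell
puppetserver ca clean --certname <fqdn>
```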
Duplicate resource found error
After a wrong manipulation (a wrong hostname setup, for example), stale data can be left behind in the puppet master (in puppetdb).
The puppet agent then complains about duplicate resources, for example:
A duplicate resource was found while collecting exported resources
This confirms stale data exists in the master (puppetdb). Clean it up with:
puppet node deactivate <wrong-fqdn>