note: This is a follow-up post to 2015-07-21-rcd-stonith
A Linux Cluster Base STONITH provider for use with modern Pacemaker clusters
This has since been accepted and merged into Fedora’s code base and as such will make its way to RHEL.
Source Code: Github
Diptrace CAD Design: Github
I have open sourced the CAD circuit design and made this available within this repo under CAD Design and Schematics.
Related RedHat Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1240868
v1 vs v2/v3 versions of the rcd_serial STONITH system
The v2/v3 cables include the following improvements:
They have a connector on the outside of the server (the female side runs from the reset pin ‘hijacker’) so that cables can be easily disconnected.
Ever forgotten to add a critical service to monitoring?
Want to know if a service or process fails without explicitly monitoring every service on a host?
…Then why not use SystemD’s existing knowledge of all the enabled services? Thanks to ‘Kbyte’ who made a simple Nagios plugin to do just this!
Requirements:
Python3 (for RHEL/CentOS 7: yum install python34)
python-nagiosplugin (my pre-built RPMs or pip3 install nagiosplugin)
PyNagSystemD
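The plugin itself (PyNagSystemD) does the real work; purely as an illustration of the underlying idea rather than the plugin’s actual code, a check along these lines would flag any systemd unit that has entered the failed state:

# Sketch only - not the PyNagSystemD plugin itself.
# Raise a Nagios CRITICAL if any systemd unit is in the failed state.
failed=$(systemctl --failed --no-legend --plain | wc -l)
if [ "$failed" -gt 0 ]; then
    echo "CRITICAL - $failed failed systemd unit(s)"
    exit 2
fi
echo "OK - no failed systemd units"
exit 0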
Scripts and source available here: sql_ascii_to_utf8
The Goal
To be able to take a Postgres database which is in SQL_ASCII encoding and import it into a UTF8-encoded database.
The Problem
PostgreSQL will generate errors like this if it encounters any non-UTF8 byte sequences during a database restore:
# pg_dump -Fc test_badchar | pg_restore -d test_badchar_utf8
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2839; 0 26852 TABLE DATA table101 postgres
pg_restore: [archiver (db)] COPY failed for table "table101": ERROR: invalid byte sequence for encoding "UTF8": 0x91
CONTEXT: COPY table101, line 1
WARNING: errors ignored on restore: 1
And the corresponding data will be omitted from the database (in this case, the whole table, even the rows which did not have a problem):
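The linked sql_ascii_to_utf8 scripts handle this properly; purely as a rough sketch of the general idea (not the author’s script), you can dump in plain format and transcode before restoring. This assumes every non-ASCII byte is Windows-1252 (0x91 is a cp1252 curly quote) and that the dump contains no pre-existing multi-byte UTF-8; mixed encodings need the smarter per-row handling the script provides.

# Rough sketch only - verify the assumption about the source bytes first.
pg_dump --format=plain test_badchar \
  | iconv -f WINDOWS-1252 -t UTF-8 \
  | psql --set ON_ERROR_STOP=on -d test_badchar_utf8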
The most common way to use rsync is probably as such:
rsync -avr user@<source>:<source_dir> <dest_dir>
Resulting in 30-35MB/s depending on file sizes.
This can be improved by using a more efficient, less secure encryption algorithm, disabling compression and telling the SSH client to disable some unneeded features that slow things down.
With the settings below I have achieved 100MB/s (at work between VMs) and over 300MB/s at home between SSD drives.
rsync -arv --numeric-ids --progress -e "ssh -T -c [email protected] -o Compression=no -x" user@<source>:<source_dir> <dest_dir>
If you want to delete files at the DST that have been deleted at the SRC (obviously use with caution):
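As a sketch of that variant (the cipher option is omitted here for brevity; reuse whichever cipher you chose above), --delete removes files at the destination that no longer exist at the source:

# Same transfer, but also delete files at the destination that are gone
# from the source - double-check the paths before running this.
rsync -arv --delete --numeric-ids --progress \
  -e "ssh -T -o Compression=no -x" \
  user@<source>:<source_dir> <dest_dir>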
This is a quick tl;dr - there are many other situations and options you could consider (see the FIO man page).
IOP/s = Input or Output operations per second
Throughput = How many MB/s can you read/write continuously
Variables worth tuning based on your situation
--iodepth
The iodepth is very dependent on your hardware.
Rotational drives without much cache and high latency (i.e. desktop SATA drives) will not benefit from a large iodepth; values between 16 and 64 could be sensible.
High speed, lower latency SSDs (especially NVMe devices) can utilise a much higher iodepth; values between 256 and 4096 could be sensible.
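For illustration (an example invocation, not one from the original post), a 4k random-read test with a deep queue against an SSD might look like the following; the file path is hypothetical and the sizes should be tuned to your hardware:

# Example only - point --filename at a scratch file, never at a device
# holding data you care about.
fio --name=randread-test --filename=/path/to/testfile \
    --rw=randread --bs=4k --size=4G \
    --ioengine=libaio --direct=1 \
    --iodepth=256 --numjobs=1 \
    --runtime=60 --time_based --group_reporting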
Let’s pretend you have a project on Gitlab called ask-izzy and you want to mirror it up to Github, which is located at https://github.com/ask-izzy/ask-izzy
Assuming you’re running Gitlab as the default user git and that your repositories are stored in /mnt/repositories, you can follow something similar to the following instructions:
Grant write access to Github
Get your Gitlab install’s pubkey from the git user:
cat /home/git/.ssh/id_rsa.pub
On Github, add this pubkey as a deploy key on the repo; make sure you tick the option to allow write access.
Add a post-receive hook to the Gitlab project
mkdir /mnt/repositories/developers/ask-izzy.git/custom_hooks/
echo "exec git push --quiet github &" > \
  /mnt/repositories/developers/ask-izzy.git/custom_hooks/post-receive
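For that hook to fire successfully, the bare repository also needs a remote named github and the hook file must be executable. These follow-up steps are assumed rather than shown in the excerpt above, and the SSH remote URL is inferred from the Github project linked earlier:

# Assumed follow-up steps (not shown in the excerpt above).
cd /mnt/repositories/developers/ask-izzy.git
git remote add github git@github.com:ask-izzy/ask-izzy.git
chmod +x custom_hooks/post-receive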
Today we launched a mobile website for homeless people
… and it was launched by one of Australia’s many recent Prime Ministers
Today alone we served up over 87,000 requests
As many of you know, I work with Infoxchange as the operations lead.
When I first heard the idea of a website or app for people who have found themselves homeless in Australia, or are worried they might, I really didn’t think it made sense - until I saw the stats showing how many homeless people in Australia have regular access to a smartphone and data, either via a cellular provider or free WiFi.
If a disk / VDI is orphaned or only partially deleted you’ll notice that under the SR it’s not assigned to any VM.
This can cause issues that look like metadata corruption resulting in the inability to migrate VMs or edit storage.
For example:
[root@xenserver-host ~]# xe vdi-destroy uuid=6c2cd848-ac0e-441c-9cd6-9865fca7fe8b
Error code: SR_BACKEND_FAILURE_181
Error parameters: , Error in Metadata volume operation for SR. [opterr=VDI delete operation failed for parameters: /dev/VG_XenStorage-3ae1df17-06ee-7202-eb92-72c266134e16/MGT, 6c2cd848-ac0e-441c-9cd6-9865fca7fe8b. Error: Failed to write file with params [3, 0, 512, 512]. Error: 5],
Removing stale VDIs
To fix this, you need to remove those VDIs from the SR after first deleting the logical volume:
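The actual removal commands were cut off above; a sketch of the usual sequence for an LVM-based SR (the LV name follows the VHD-<vdi-uuid> convention and the UUIDs are the ones from the example - check them carefully, this permanently destroys the volume) would be:

# Assumed cleanup sequence - UUIDs mirror the example above; verify yours first.
lvremove /dev/VG_XenStorage-3ae1df17-06ee-7202-eb92-72c266134e16/VHD-6c2cd848-ac0e-441c-9cd6-9865fca7fe8b
xe vdi-forget uuid=6c2cd848-ac0e-441c-9cd6-9865fca7fe8b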
“Having a SCSI ID is a f*cking idiotic thing to do.”
- Linus Torvalds
…and after the amount of time I’ve wasted getting XenServer to play nicely with LIO iSCSI failover I tend to agree.
The Problem
One oddity of Xen / XenServer’s storage subsystem is that it identifies iSCSI storage repositories via a calculated SCSI ID rather than the iSCSI Serial - which would be the sane thing to do.
Citrix’s less than ideal take on dealing with SCSI ID changes is for you to take your VMs offline, disconnect the storage repositories, recreate them, then go through all your VMs and re-attach their orphaned disks hoping that you remembered to add some sort of hint as to what VM they belong to, then finally wipe the sweat and tears from your face.
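The more practical angle, and an assumption on my part rather than anything taken from the slides below, is to stop the SCSI ID from changing at all by pinning the same T10 unit serial on every LIO node in the failover pair. A minimal sketch using LIO’s configfs interface, with an illustrative backstore name (iblock_0/iscsi_lun0) and serial:

# Run on BOTH LIO nodes so the exported LUN reports the same unit serial
# (and therefore the same calculated SCSI ID) before and after failover.
echo "illustrative-serial-0001" > \
  /sys/kernel/config/target/core/iblock_0/iscsi_lun0/wwn/vpd_unit_serial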
Slides Failover Demo