Chargen.One

The latest posts from Chargen.One.

from High5!

Recently we wrote a post on Moving back to Lighttpd, and Michael Dexter thought I could spend my time wisely and do a short write-up on our use of dehydrated with Lighttpd.

In order to start with dehydrated we of course need to install it:

# pkg install dehydrated

Once it's all installed you can find the dehydrated configuration in /usr/local/etc/dehydrated

The hosts and domains you want to get certificates for need to be added to domains.txt. For example:

example.com www.example.com example1.com secure.example1.com

The first host/domain listed will be used as the filename to store the keys and certificates. There are a number of examples in the file itself if you want to get funky.
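
One of those tricks is giving the certificate an alias, so its files are stored under that name instead of the first domain; a hedged example (the alias name here is made up):

example.com www.example.com > example_com_rsa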

Hooks

If you want to restart services or do anything else special when, for example, new certificates are generated, there is a file called hooks.sh. This script allows you to hook into any part of the process and run commands at that point.

The hook we are using is deploy_cert(). We are going to use this hook to:

  • create a PEM certificate for Lighttpd
  • change the owner to www
  • restart Lighttpd

What that looks like is something like this:

deploy_cert() {
    cat "${KEYFILE}" "${CERTFILE}" > "${BASEDIR}/certs/${DOMAIN}/combined.pem"
    chown -R www "${KEYFILE}" "${FULLCHAINFILE}" "${BASEDIR}/certs/${DOMAIN}/combined.pem"
    service lighttpd restart
}
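
Note that in dehydrated's example hook script these variables are filled from the positional parameters at the top of the function; if you start from a bare hook file, you presumably need something like:

deploy_cert() {
    local DOMAIN="${1}" KEYFILE="${2}" CERTFILE="${3}" FULLCHAINFILE="${4}" CHAINFILE="${5}" TIMESTAMP="${6}"
    # ...the commands shown above...
}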

The last part that is needed is to make sure this is run every day with cron.

@daily  root /usr/local/bin/dehydrated -c

In most cases this will be all that is needed to get going with dehydrated.

Lighttpd

You will need to let Lighttpd know about dehydrated and point it to the acme-challenge directory under .well-known. You can do this with an alias like:

alias.url += ("/.well-known/acme-challenge/" => "/usr/local/www/dehydrated/")
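
For this alias to line up with dehydrated, the WELLKNOWN setting in its config should point at the same directory; with the paths above that would presumably be:

WELLKNOWN="/usr/local/www/dehydrated"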

The Lighttpd config we are using for SSL/TLS is the following:

$SERVER["socket"] == ":443" {
  ssl.engine = "enable" 
  ssl.pemfile = "/usr/local/etc/dehydrated/certs/example.com/combined.pem"
  ssl.ca-file = "/usr/local/etc/dehydrated/certs/example.com/chain.pem"
  ssl.cipher-list = "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384"
  ssl.dh-file = "/usr/local/etc/ssl/dhparam.pem" 
  ssl.ec-curve = "secp384r1"
  setenv.add-response-header = (
    "Strict-Transport-Security" => "max-age=31536000; includeSubdomains",
    "X-Frame-Options" => "SAMEORIGIN",
    "X-XSS-Protection" => "1; mode=block",
    "X-Content-Type-Options" => "nosniff",
    "Referrer-Policy" => "no-referrer",
    "Feature-Policy" =>  "geolocation none; midi none; notifications none; push none; sync-xhr none; microphone none; camera none; magnetometer none; gyroscope none; speaker none; vibrate none; fullscreen self; payment none; usb none;"  
  )
}

To finish it all off you can now run dehydrated, which in most cases would be:

# dehydrated -c

The complete Lighttpd config can be found in our Git Repository.

 

from OpenBSD Amsterdam

The post written about rdist(1) on johan.huldtgren.com sparked us to write one as well. It's a great, underappreciated tool. And we wanted to show how we wrapped doas(1) around it.

There are two services in our infrastructure for which we were looking to keep the configuration in sync and to reload the process when the configuration had indeed changed. There is a pair of nsd(8)/unbound(8) hosts and a pair of hosts running relayd(8)/httpd(8) with carp(4) between them.

We didn't have a requirement to go full configuration management with tools like Ansible or Salt Stack. And there wasn't any interest in building additional logic on top of rsync or repositories.

Enter rdist(1). rdist is a program to maintain identical copies of files over multiple hosts. It preserves the owner, group, mode, and mtime of files if possible and can update programs that are executing.

The only tricky part with rdist(1) is that copying files owned by a privileged user and restarting services has to be done by root. Our solution to the problem was to wrap doas(1) around rdist(1).

We decided to create a separate user account for rdist(1) to operate with on the destination host, for example:

ns2# useradd -m rupdate

Create an ssh key on the source host where you want to copy from:

ns1# ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_rdist

Copy the public key to the destination host for the rupdate user in .ssh/authorized_keys.
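
One way to do that on the destination host, with the actual public key pasted in place of the placeholder, is:

ns2# install -d -o rupdate -m 700 ~rupdate/.ssh
ns2# echo "ssh-ed25519 AAAA... rdist" > ~rupdate/.ssh/authorized_keys
ns2# chown rupdate ~rupdate/.ssh/authorized_keys
ns2# chmod 600 ~rupdate/.ssh/authorized_keys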

In order to wrap doas(1) around rdistd(1) we have to rename the original file. It's the only way we were able to do this.

Move rdistd to rdistd-orig on the destination host:

ns2# mv /usr/bin/rdistd /usr/bin/rdistd-orig

Create a new shell script rdistd with the following:

#!/bin/sh
/usr/bin/doas /usr/bin/rdistd-orig -S

Make it executable:

ns2# chmod 555 /usr/bin/rdistd

Add rupdate to doas.conf(5) like:

permit nopass rupdate as root cmd /usr/bin/rdistd
permit nopass rupdate as root cmd /usr/bin/rdistd-orig

Once that is all done we can create the files needed for rdist(1).

To copy the nsd(8) and unbound(8) configuration we created a distfile like:

HOSTS = ( rupdate@ns2.example.com )

FILES = ( /var/nsd )

EXCL = ( nsd.conf *.key *.pem )

${FILES} -> ${HOSTS}
	install ;
	except /var/nsd/db ;
	except /var/nsd/etc/${EXCL} ;
	except /var/nsd/run ;
	special "logger rdist update: $REMFILE" ;
	cmdspecial "rcctl reload nsd" ;

unbound:
/var/unbound/etc/unbound.conf -> ${HOSTS}
	install ;
	special "logger rdist update: $REMFILE" ;
	cmdspecial "rcctl reload unbound" ;

The distfile describes the destination HOSTS, the FILES which need to be copied, and the ones which need to be EXCLuded. When it runs it will copy the selected FILES to the destination HOSTS, except for the directories listed.

The install command is used to copy out-of-date files and/or directories.

The except command is used to update all of the files in the source list except for the files listed in name list.

The special command is used to specify sh(1) commands that are to be executed on the remote host after the file in name list is updated or installed.

The cmdspecial command is similar to the special command, except it is executed only when the entire command is completed instead of after each file is updated.

In our case the unbound(8) config doesn't change very often, so we used a label to only update this when needed, with:

ns1# rdist unbound

To keep our relayd(8)/httpd(8) in sync we did something like:

HOSTS = ( rupdate@relayd2.example.com )

FILES = ( /etc/acme /etc/ssl /etc/httpd.conf /etc/relayd.conf /etc/acme-client.conf )

${FILES} -> ${HOSTS}
	install ;
	special "logger rdist update: $REMFILE" ;
	cmdspecial "rcctl restart relayd httpd" ;

If you want cron(8) to pick this up via the system script daily(8) you can save the file as /etc/Distfile.

To make sure the correct username and key are used you can add this to your .ssh/config file:

Host ns2.example.com
	User rupdate
	IdentityFile ~/.ssh/id_ed25519_rdist

When you don't store the distfile in /etc you can add the following to your .profile:

alias rdist='rdist -f ~/distfile'

Running rdist will result in the following type of logging on the destination host:

==> /var/log/daemon <==
Nov 13 09:59:15 name2 rdistd-orig[763]: ns2: startup for ns1.example.com

==> /var/log/messages <==
Nov 13 09:59:15 ns2 rupdate: rdist update: /var/nsd/zones/reverse/192.168.10.0

==> /var/log/daemon <==
Nov 13 09:59:16 ns2 nsd[164]: zone 10.168.192.in-addr.arpa read with success                     

You can follow us on Twitter and Mastodon.

 

from High5!

There are some FreeBSD machines in our infrastructure which run NGINX. After the recent announcement of the F5 purchase of NGINX, we decided to move back to Lighttpd.

We have not seen a lot of open source projects do well after the parent company got acquired. We used Lighttpd in the past, before the project stalled; that doesn't seem to be the case anymore, so we decided to check it out again.

The configuration discussed here is roughly what we used NGINX for.

A lot of the options within Lighttpd are enabled by using modules. These are the modules we have enabled on all our Lighttpd servers.

server.modules = (
  "mod_auth",
  "mod_expire",
  "mod_compress",
  "mod_rewrite",
  "mod_redirect",
  "mod_alias",
  "mod_access",
  "mod_setenv",
  "mod_evhost",
  "mod_fastcgi",
  "mod_accesslog",
  "mod_openssl"
)

Which IP and port Lighttpd listens on is defined in a couple of different ways. For IPv4, server.port and server.bind are used. For IPv6 you have to use $SERVER["socket"]. The same is true for the SSL config.

server.port = "80"
server.bind = "0.0.0.0"
$SERVER["socket"] == "[::]:80" { }
$SERVER["socket"] == "[::]:443" { }
$SERVER["socket"] == ":443" {
  ssl.engine = "enable"
  ssl.pemfile = "/usr/local/etc/ssl/certs/example.com/combined.pem"
  ssl.ca-file = "/usr/local/etc/ssl/certs/example.com/chain.pem"
  ssl.cipher-list = "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384"
  ssl.dh-file = "/usr/local/etc/ssl/certs/dhparam.pem"
  ssl.ec-curve = "secp384r1"
  setenv.add-response-header = (
    "Strict-Transport-Security" => "max-age=31536000; includeSubdomains",
    "X-Frame-Options" => "SAMEORIGIN",
    "X-XSS-Protection" => "1; mode=block",
    "X-Content-Type-Options" => "nosniff",
    "Referrer-Policy" => "no-referrer",
    "Feature-Policy" =>  "geolocation none; midi none; notifications none; push none; sync-xhr none; microphone none; camera none; magnetometer none; gyroscope none; speaker none; vibrate none; fullscreen self; payment none; usb none;"
  )
}

Lighttpd requires a PEM certificate, which you can easily create with:

# cat domain.key domain.crt > combined.pem

You can create the dhparam.pem file with:

# openssl dhparam -out dhparam.pem 4096

These are the global server settings we are using on FreeBSD.

server.username = "www"
server.groupname = "www"
server.pid-file = "/var/run/lighttpd.pid"
server.event-handler = "freebsd-kqueue"
server.stat-cache-engine = "disable"
server.max-write-idle = 720
server.tag = "unknown"
server.document-root = "/usr/local/www/default/"
server.error-handler-404 = "/404.html"
accesslog.filename = "/usr/local/www/logs/lighttpd.access.log"
server.errorlog = "/usr/local/www/logs/lighttpd.error.log"
server.dir-listing = "disable"

Some global settings which apply to all the websites served by Lighttpd.

index-file.names = ("index.php", "index.html", "index.htm")
url.access-deny = ("~", ".inc", ".sh", "sql", ".htaccess")
static-file.exclude-extensions = (".php", ".pl", ".fcgi")

Alias for Let's Encrypt.

alias.url += ("/.well-known/acme-challenge/" => "/usr/local/www/acme/")

Enable compression for certain filetypes.

compress.cache-dir = "/tmp/lighttpdcompress/"
compress.filetype = ("text/plain", "text/css", "text/xml", "text/javascript")

When authentication is needed you can specify this as below. Different backends are supported.

auth.backend = "htpasswd"
auth.backend.htpasswd.userfile = "/usr/local/etc/lighttpd/htpasswd"

General Expire and Cache-Control headers for certain filetypes.

$HTTP["url"] =~ "\.(js|css|png|jpg|jpeg|gif|ico)$" {
  expire.url = ( "" => "access plus 1 months" )
}

When you are running WordPress sites you might want to deny access to certain URLs.

$HTTP["url"] =~ "/(?:uploads|files|wp-content|wp-includes).*\.(php|phps|txt|md|exe)$" {
  url.access-deny = ("")
}
$HTTP["url"] =~ "/(wp-config|xmlrpc)\.php$" {
  url.access-deny = ("")
}

Define for which host and URL authentication is needed.

$HTTP["host"] =~ "www1.example.com" {
  auth.require = ( "/admin/" => (
    "method" => "basic",
    "realm" => "Restricted",
    "require" => "valid-user" )
  )
}

Redirect certain hosts from http to https.

$HTTP["host"] =~ "(www\.)?example.com" {
  url.redirect = ("^/(.*)" => "https://www.example.com/$1")
}

There is a module available which helps to assign the correct server.document-root for virtual hosts. This can be done with mod_evhost and we are using the following pattern:

$HTTP["host"] =~ "^(www.)?[^.]+\.[^.]+$" {
  evhost.path-pattern = "/usr/local/www/www.%2.%1/"
}

To be able to use pretty URLs with WordPress you can use the following mod_rewrite rules.

url.rewrite = (
  "^/(wp-.+).*/?" => "$0",
  "^/(.*)\.(.+)$" => "$0",
  "^/(.+)/?$" => "/index.php/$1"
)

The final piece of the puzzle: when you are using PHP-FPM, the following config can be used.

fastcgi.server = ( ".php" =>
  ( "localhost" =>
    (
      "host" => "127.0.0.1",
      "port" => 9000
    )
  )
)
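
This assumes PHP-FPM is listening on that TCP port. On FreeBSD that typically comes down to the default pool setting (the exact path differs per PHP version):

; /usr/local/etc/php-fpm.d/www.conf (excerpt)
listen = 127.0.0.1:9000

and both services being enabled in /etc/rc.conf:

# sysrc php_fpm_enable=YES
# sysrc lighttpd_enable=YES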

The complete config can be found in our Git Repository.

 

from V6Shell (Jeff)

It looks like my 1st (or 0th) post was blank. I was rather lost when I saw “Write ...” the first time around, but I get it now (and fixed it). Hahaha :^D

Post 1

Suppose my posts as they are and will be are like in C, ...

int
main(int argc, char *argv[])
{

    /* do nothing successfully */
    return 0;
}

.. where if (argv[0] == NULL || *argv[0] == '\0') can only be true if you don't want execve(2) to (be able to) execute your program successfully and return 0. If memory serves, you'll end up with a core dump too in all cases?? Could be! I don't remember the last time I tried; I have an old test case somewhere.

Either way, good fun all around it is :^)

 

from V6Shell (Jeff)

Post 0 (or void)

The next 1 is better, and the other side of (the) void might help to liberate us all.

... I'll stop there .. Buenas noches por ahora .

 

from OpenBSD Amsterdam

OpenBSD Amsterdam was in search of a lightweight toolset to keep track of resource usage, at a minimum the CPU load generated by the vmm(4)/vmd(8) hosts and the traffic from and to the hosts. A couple of weeks ago we ended up with a workable MRTG setup. While it worked, it didn't look very pretty.

In a moment of clarity, we thought about using RRDtool. Heck, why shouldn't we give it a try? From the previous tooling, we already had some required building blocks in place to make MRTG understand the CPU Cores and uptime from OpenBSD.

Before we start:

# pkg_add rrdtool

We decided to split the collection of the different OIDs (SNMP Object Identifiers) into three different scripts, which cron(8) calls from a wrapper script.

  • uptime.sh
  • cpu_load.sh
  • interface.sh

uptime.sh

#!/bin/sh
test -n "$1" || exit 1
HOST="$1"
COMMUNITY="public"
UPTIMEINFO="/tmp/${HOST}-uptime.txt"
TICKS=$(snmpctl snmp get ${HOST} community ${COMMUNITY} oid hrSystemUptime.0 | cut -d= -f2)
DAYS=$(echo "${TICKS}/8640000" | bc -l)
HOURS=$(echo "0.${DAYS##*.} * 24" | bc -l)
MINUTES=$(echo "0.${HOURS##*.} * 60" | bc -l)
SECS=$(echo "0.${MINUTES##*.} * 60" | bc -l)
test -n "$DAYS" && printf '%s days, ' "${DAYS%.*}" > ${UPTIMEINFO}
printf '%02d\\:%02d\\:%02d\n' "${HOURS%.*}" "${MINUTES%.*}" "${SECS%.*}" >> ${UPTIMEINFO}

This is a separate script because the uptime of each host is used in both graphs.

The origins of this script are detailed in our MRTG Setup.

cpu_load.sh

#!/bin/sh
test -n "$1" || exit 1
HOST="$1"
COMMUNITY="public"
RRDFILES="/var/rrdtool"
IMAGES="/var/www/htdocs"
WATERMARK="OpenBSD Amsterdam - https://obsda.ms"
RRDTOOL="/usr/local/bin/rrdtool"
CPUINFO="/tmp/${HOST}-cpu.txt"
UPTIME=$(cat /tmp/${HOST}-uptime.txt)
NOW=$(date "+%Y-%m-%d %H:%M:%S %Z" | sed 's/:/\\:/g')

if ! test -f "${RRDFILES}/${HOST}-cpu.rrd"
then
echo "Creating ${RRDFILES}/${HOST}-cpu.rrd"
${RRDTOOL} create ${RRDFILES}/${HOST}-cpu.rrd \
        --step 300 \
        DS:ds0:GAUGE:600:U:U \
        RRA:MAX:0.5:1:20000
fi

snmpctl snmp walk ${HOST} community ${COMMUNITY} oid hrProcessorLoad | cut -d= -f2 > ${CPUINFO}
CORES=$(grep -cv "^0$" ${CPUINFO})
CPU_LOAD_SUM=$(awk '{sum += $1} END {print sum}' ${CPUINFO})
CPU_LOAD=$(echo "scale=2; ${CPU_LOAD_SUM}/${CORES}" | bc -l)

${RRDTOOL} update ${RRDFILES}/${HOST}-cpu.rrd N:${CPU_LOAD}

${RRDTOOL} graph ${IMAGES}/${HOST}-cpu.png \
        --start -43200 \
        --title "${HOST} - CPU" \
        --vertical-label "% CPU Used" \
        --watermark "${WATERMARK}" \
        DEF:CPU=${RRDFILES}/${HOST}-cpu.rrd:ds0:AVERAGE \
        AREA:CPU#FFCC00 \
        LINE2:CPU#CC0033:"CPU" \
        GPRINT:CPU:MAX:"Max\:%2.2lf %s" \
        GPRINT:CPU:AVERAGE:"Average\:%2.2lf %s" \
        GPRINT:CPU:LAST:" Current\:%2.2lf %s\n" \
        COMMENT:"\\n" \
        COMMENT:"  SUM CPU Load / Active Cores = % CPU Used\n" \
        COMMENT:"  Up for ${UPTIME} at ${NOW}"

On the first run, RRDtool will create the .rrd file. On every subsequent run, it will update the file with the collected values and update the graph.

The origins of this script are detailed in our MRTG Setup.

interface.sh

#!/bin/sh
test -n "$1" || exit 1
test -n "$2" || exit 1                                                                             
HOST="$1"                                                                                          
INTERFACE="$2"                                                                                     
COMMUNITY="public"                                                                                 
RRDFILES="/var/rrdtool"
IMAGES="/var/www/htdocs"
WATERMARK="OpenBSD Amsterdam - https://obsda.ms"
RRDTOOL="/usr/local/bin/rrdtool"
UPTIME=$(cat /tmp/${HOST}-uptime.txt)
NOW=$(date "+%Y-%m-%d %H:%M:%S %Z" | sed 's/:/\\:/g')                                              

if ! test -f "${RRDFILES}/${HOST}-${INTERFACE}.rrd"                                                
then
echo "Creating ${RRDFILES}/${HOST}-${INTERFACE}.rrd"                                               
${RRDTOOL} create ${RRDFILES}/${HOST}-${INTERFACE}.rrd \                                           
        --step 300 \
        DS:ds0:COUNTER:600:0:1250000000 \
        DS:ds1:COUNTER:600:0:1250000000  \
        RRA:AVERAGE:0.5:1:600 \
        RRA:AVERAGE:0.5:6:700 \
        RRA:AVERAGE:0.5:24:775 \
        RRA:AVERAGE:0.5:288:797 \
        RRA:MAX:0.5:1:600 \
        RRA:MAX:0.5:6:700 \
        RRA:MAX:0.5:24:775 \
        RRA:MAX:0.5:288:797
fi

IN=$(snmpctl snmp get ${HOST} community ${COMMUNITY} oid ifInOctets.${INTERFACE} | cut -d= -f2)    
OUT=$(snmpctl snmp get ${HOST} community ${COMMUNITY} oid ifOutOctets.${INTERFACE} | cut -d= -f2)  
DESCR=$(snmpctl snmp get ${HOST} community ${COMMUNITY} oid ifDescr.${INTERFACE} | cut -d= -f2 | tr -d '"')

${RRDTOOL} update ${RRDFILES}/${HOST}-${INTERFACE}.rrd N:${IN}:${OUT}                              

${RRDTOOL} graph ${IMAGES}/${HOST}-${INTERFACE}.png \                                              
        --start -43200 \
        --title "${HOST} - ${DESCR}" \
        --vertical-label "Bits per Second" \
        --watermark "${WATERMARK}" \
        DEF:IN=${RRDFILES}/${HOST}-${INTERFACE}.rrd:ds0:AVERAGE \                                  
        DEF:OUT=${RRDFILES}/${HOST}-${INTERFACE}.rrd:ds1:AVERAGE \                                 
        CDEF:IN_CDEF="IN,8,*" \
        CDEF:OUT_CDEF="OUT,8,*" \
        AREA:IN_CDEF#00FF00:"In " \
        GPRINT:IN_CDEF:MAX:"Max\:%5.2lf %s" \
        GPRINT:IN_CDEF:AVERAGE:"Average\:%5.2lf %s" \                                              
        GPRINT:IN_CDEF:LAST:" Current\:%5.2lf %s\n" \                                              
        LINE2:OUT_CDEF#0000FF:"Out" \
        GPRINT:OUT_CDEF:MAX:"Max\:%5.2lf %s" \
        GPRINT:OUT_CDEF:AVERAGE:"Average\:%5.2lf %s" \                                             
        GPRINT:OUT_CDEF:LAST:" Current\:%5.2lf %s\n" \                                             
        COMMENT:"\\n" \
        COMMENT:"  Up for ${UPTIME} at ${NOW}"

To pinpoint the network interface you want to measure the bandwidth for, this command prints the available interfaces:

snmpctl snmp walk <host> community <string> oid ifDescr

This will output a list like:

ifDescr.1="em0"
ifDescr.2="em1"
ifDescr.3="enc0"
ifDescr.4="lo0"
ifDescr.5="bridge880"
ifDescr.6="vlan880"
ifDescr.13="pflog0"
ifDescr.669="tap0"
ifDescr.670="tap1"

The number behind ifDescr is the one that you need to feed to interface.sh, for example:

# interface.sh <host> 5

Finally the wrapper.sh script calls all the aforementioned scripts:

#!/bin/sh
SCRIPTS="/var/rrdtool"
for i in $(jot 2 1); do ${SCRIPTS}/uptime.sh host${i}.domain.tld; done
for i in $(jot 2 1); do ${SCRIPTS}/cpu_load.sh host${i}.domain.tld; done
${SCRIPTS}/interface.sh host1.domain.tld 12
${SCRIPTS}/interface.sh host2.domain.tld 11
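
The RRDs are created with a 300 second step, so the wrapper wants to run every five minutes; a root crontab(5) entry along these lines should do (assuming wrapper.sh also lives in /var/rrdtool):

*/5	*	*	*	*	/var/rrdtool/wrapper.sh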

The resulting graphs:

To serve the graphs we use httpd(8) with the following config:

server "default" {
        listen on * port 80
        location "/.well-known/acme-challenge/*" {
                root "/acme"
                request strip 2
        }
        location * {
                block return 302 "https://$HTTP_HOST$REQUEST_URI"
        }
}

server "default" {
        listen on * tls port 443
        tls {
                certificate "/etc/ssl/default-fullchain.pem"
                key "/etc/ssl/private/default.key"
        }
        location "/.well-known/acme-challenge/*" {
                root "/acme"
                request strip 2
        }
        root "/htdocs"
}

All the scripts can be found in our Git Repository.

You can follow us on Twitter and Mastodon.

 

from h3artbl33d

Email

You and I need to have a serious talk about email. I have liberated my email and want to share the experience with you, so you are informed enough to decide whether you want to do the same.

The bad

Currently, the top 10 percent of all mx records mainly consist of Google, with GoDaddy in the second position, as can be gathered from these statistics*:

Mailserver Count % Of total
mailstore1.secureserver.net 22,989,327 2,53%
smtp.secureserver.net 22,984,706 2,54%
aspmx.l.google.com 10,141,392 1,11%
alt1.aspmx.l.google.com 9,878,764 1,09%
alt1.aspmx.l.google.com 9,800,303 1,08%
aspmx2.googlemail.com 5,607,263 0,62%
aspmx3.googlemail.com 5,477,548 0,60%
mail.b-io.co 4,449,479 0.49%
alt3.aspmx.l.google.com 4,121,725 0,45%
alt4.aspmx.l.google.com 4,057,221 0,45%

This is bad for a couple of reasons:

  • Neither Google, nor GoDaddy give a flying f*ck about your privacy
  • It's centralization at its worst
  • Everything stored at a few parties, really?

Let's walk through these arguments:

The first argument, privacy, should be obvious. Facebook is very hostile towards user privacy, but Google is even worse. Gmail is offered free of charge, since you are the product. You are an awesome human being – you deserve better. Way better.

And so do the human beings you exchange messages with! Perhaps you haven't thought of this before, but by using Gmail, you also made that choice for the other parties. Every message they send to you – a Gmail user – gets stored on the servers of the big bad G, only to be kept for an indefinite amount of time. And logically, this also goes for every message you send to them.

The second argument, centralization, is against the design of the world wide web. It's supposed to be a place to share knowledge, collaborate and to be used to heighten the efficiency of our daily lives. It sure as hell wasn't meant to be controlled by a handful of commercial parties.

Furthermore, while perhaps convenient, it's bad that a few select parties have a huge amount of data, that combined and intertwined is your whole digital persona.

The ugly

Email itself is an old-fashioned protocol. It was never designed to mitigate modern threats, nor to be free of eavesdropping. While more and more mailservers use traffic encryption (TLS) to exchange messages, this is still optional.

A different initiative, GPG – which allows you to encrypt the content of the message itself – has failed miserably, because it's too hard to use for the average user. It's easy to make mistakes, especially with frequent usage. And while it allows encryption of the message content, it doesn't do anything about the metadata (to, from, subject, etc).

The good

Last, but certainly not least: this is not the end. It sure as hell isn't too late. The tide can still be turned! And even easier: you can still reclaim the ownership of your mailbox and make sure that your privacy – and the privacy of your contacts – is still respected.

Mainly, there are a couple of ways, none of them hard, to reclaim your inbox:

  • Host your own email server; probably the hardest, but also the most efficient. You could set up your own server at home, throw OpenBSD on it along with Dovecot and OpenSMTPD – or use a script like Caesonia to help you with the installation.
  • Go with a privacy friendly provider; much less of a hassle. Popular providers include Mailbox, Mailfence, Fastmail and Protonmail – with the latter not supporting IMAP, POP3 and SMTP directly.
  • Get yourself a Helm; store your email in the comfort of your home without the hassle of setting up and maintaining your own server. It does require setup and maintenance via a mobile app, uses Docker containers internally, and comes in at 299 USD plus 99 USD/year from the second year.

Closing thought

Over the next few weeks, I'll be writing more articles and insights into liberating your mailbox, hosting your own server and reclaiming your inbox. Feel free to ask me for help via mail (I prefer mail from non-Gmail addresses, haha) at hello@h3artbl33d.nl, via Twitter or Mastodon.

Statistics about mailserver/mx usage come from securitytrails.com

 

from Poems

I found you with these eyes exploring your perfection everything identified aren't you just perfection every little record eager to obtain later recollected I fall in love again

everything just perfect everything's alright oh, that day... the day I lost my sight

now given another vision now given another view this one is more empty this one can't see you

 

from Poems

It's to late Let to fate A retrospective request Please slow down I've not done my best Growing years Growing fears Losing energy Time will pass watching the colour in my life turn glass I wanna go home I've been gone so long That moment at the end of a false smile They'll see I've been gone too long

 

from steve

Smartphone manipulation

Smartphones and Social Media are ruining our lives, say the press. It's not smartphones or social media itself. It's that we're biologically unprepared for how to deal with them. Smartphones and Social Media are designed to hijack the attention and reward centres of the brain. They do it so well we don't notice how our brains are being altered.

In this post I write about how I changed my Smartphone use, and how this helped me reclaim my time and my attention span. It's about my ability to be present in the moment. Something I lost, then regained.

I won't talk about everything I do. Instead, I'll focus on things you can do. There's no talk of compiling your own firmware, mainlining F-droid or micro-g here. That's for some other time.

Instead, this post is about what I did that you can do without changing your phone.

I never felt more connected to friends through my phone, but so absent in their presence.

How Things Got Out Of Control

A few years ago, I had an iPhone 6. It was the digital tool I used more than anything else. If you could think of a pointless app, it'd be on there. When a notification came I'd hear a noise. The screen would light up, and, in pavlovian style so would my neurons. My phone went to bed with me, it woke up with me, it went to work with me.

The phone takes over our lives, like boiling a frog

I started to find that I was getting less happy. I felt less able to concentrate. I never felt more connected to friends through my phone, but so absent in their presence. My attention span shrivelled. I couldn't watch whole films. Reading books was impossible. In pockets of free time I'd check Facebook, Twitter, Instagram and email, ad nauseam. I've missed buses and trains because I was so absorbed in something I don't even remember reading. I had forgotten boredom. There was no time for my mind to wander. I became an angelheaded hipster, burning for the ancient heavenly connection to the starry dynamo in the machinery of night.

I lost the buzz of low-effort connection, but gained the ability to connect with purpose.

The Flashpoint

When Apple pulled the plug on the headphone jack, I realised my time with Apple's products was over. I didn't want to jump from Apple's walled garden into Google's. Instead I tried to degoogle my life (which is definitely another post in itself).

In the process I found ways to make my phone work for me rather than against me. I found a whole new world of ethical social media. So far, I've gained happiness, time and space for myself. I lost the buzz of low-effort connection, but gained the ability to connect with purpose.

I wanted a sustainable phone experience. This isn't a minimalist experience. This isn't a phone pared back to the basics. It's a phone experience that works for, not against me. Everyone's sustainable phone experience is different. It's a journey, not a goal. A journey I encourage readers to travel.

Stage 1: Do Not Disturb

I'm not waking this pupper up, and neither is my phone

The first thing I did was reduce the volume and timing of notifications I receive. One of the best features on both Android and iOS is Do Not Disturb. This isn't enough alone, but combined with sane rules makes the break between you and your phone.

Since the early days of Blackberry, people were chained to notifications. Notifications have many problems, the worst of which is the impact on sleep. Do Not Disturb helps you take back your sleep. It also lets you take back your time.

Here's how I use my Do Not Disturb settings:

  1. No calls, messages or notifications from 8pm – 10am
  2. Notifications from Tusky, Signal, QKSMS and calls on Saturday and Sunday daytime
  3. Exceptions for specific contact groups over specific services

Using contact groups to manage exceptions lets family and friends reach you in your own time. Calls also come through if someone calls 3 times in 5 minutes. This works on both iOS and Android.

Stage 2: Notifications

Putting the no in No-tifications

I also restrict notifications. I restrict which apps can send notifications. I restrict when apps can send notifications.

On my iPhone I used IHG's app to book hotels. The app used notifications to update me about bookings. It also advertised to me. Many apps use notifications for adverts. This doesn't happen on my current phone.

The only apps that can trigger notifications on my phone are:

  • Tusky for Mastodon notifications
  • Mail notifications
  • Calendar notifications
  • Signal Messenger for messages
  • App update notifications from F-Droid and Yalp

For everything else, I can check the app when I feel like it.

Imagine caring what people you barely know are up to while the most amazing person in the world lies next to you.

Stage 3: Reduce Interaction

Oh god, no.

A major change was getting social media to notify me by email. This increases the number of steps needed to respond. I use a dedicated mail account for low-value mail such as notifications and sign-ups. I now respond to notifications on my own time, not when a light pops up.

If I have an email notification, the app will still show it as unread when I visit. I set aside time to use social networks. I try to use them with purpose instead of passively scrolling through every 15 minutes.

Marizel and I noticed we used our phones when we woke up, and used them in bed before sleeping. Imagine caring what people you barely know are up to while the most amazing person in the world lies next to you.

Our phones stay outside of the bedroom now. In fact we use no technology in the bedroom beyond a light and a radiator. The room is now only used for about 3 things, none of which need complex technology.

Stage 4: Trimming Apps

Wanna see my home screen?

My home screen

I deleted Facebook in light of its continuous commitment to violating privacy. The Cambridge Analytica scandal was the last straw for me. I understand that for many people that's not an option. For example, I'm still a heavy twitter user but it often makes me sad. That's why I keep it off my home screen.

Reducing the number of apps I have helped a lot. Most of the time, you don't need an app. I started by removing apps I hadn't used in 6 months. I removed apps that had functioning mobile sites and bookmarked them on my home screen. I switched to lighter non-official social media clients that didn't bug or track me.

There are alternatives that will help you get your time back. If you can't delete Facebook, remove the app from your phone. If that's too much, replace it with a dedicated browser app only used for Facebook. Set Facebook's mobile site as the home page in that browser. Bonus points if your dedicated browser app supports ad-blocking.

If you find social media makes you angry or upset, consider using it from your laptop only. Laptops tend not to stay online, unlike phones and tablets. Using a laptop requires a conscious decision to engage instead of a passive default. You can still catch up with friends and family on Facebook, but need to make a little effort to do so. Friction is the best tool to control social media use.

Setting up ad-blockers on a laptop is often easier than on a phone. Having said that, there are great apps like Better that are worth looking at. Android Firefox supports add-ons on mobile, such as uBlock Origin.

Stage 5: Seasonal Cleaning

It's a journey, not a destination

To keep things light, I created a 3 month folder on my home screen. Every month I go through my installed apps. If I haven't used an app that month, it goes in the 3 month folder. This means the app is on my home screen but not taking up space.

If I use it, it comes out of the drawer and off my home screen. If I don't use the app in 3 months, I uninstall it. This keeps my phone light, quick and clean.

I'm pretty brutal about my home screen. My wallpaper is black, I use dark mode where I can and I keep the screen brightness low. I have 11 icons on my home screen, along with two folders:

  • The 3 months folder discussed earlier
  • A folder named “Don't”.

The Don't folder holds apps I want to use less. Don't doesn't mean, “Don't use this”. It means “Don't make this your default action”. In my Don't folder currently, I have the following apps:

  • Red Reader
  • SimplyWall.st
  • Tusky

Once I feel my relationship with an app is back on track, I take it out of don't and decide where to put it next. If it doesn't improve, I'll consider removing it. I don't have to remove it. I just have to make an active decision about that app's future.

As I mentioned, my wallpaper is black, but I've found some great options for lockscreens.

An Aside: Kinder, Gentler Social Media

Ethical Social Media exists. You should try it

You might've wondered what Tusky and Mastodon are. Well, I used to use Facebook, Twitter and Instagram. I find that these apps would encourage me to vomit thoughts, argue with people or share things that upset me. I decided to find alternatives, and I'm glad I did.

I use Mastodon as a much happier alternative to Twitter. Mastodon is a bit like a friendlier, happier twitter. It's not the same, but that's a good thing. Instead of Instagram I use Pixelfed but that's still new, so I'm waiting for an Android app. For writing I use writefreely. You're using it now to read this.

These applications are all part of something called the Fediverse. It's a non-commercial, open way of sharing with each other. Nobody's incentivised to get you to like or share. Likewise, nobody's incentivised to like or share your stuff. These spaces tend to be smaller and sometimes less active, but are way healthier.

Ethical social media is less invasive. It avoids the dopamine-feedback loop you get with commercial networks. People can still contact me via social networks on Mastodon and Pixelfed. Of course, there are plenty of options for email.

Stage 6: Making Social Media an Active Choice

There are no wrong answers, just take the time to choose

I've got rid of most of the more evil social media around, so how do I reclaim my life? Well, I start by setting particular times to use social media. I check social media on my phone mostly at the start of the day, and about an hour before bed. The rest of the time it needs to be a conscious decision to use it on my laptop.

It takes time to reclaim your attention span. I've found Kindles to be amazing devices for this. I just wish I could find a more open alternative that did what I wanted. I've also found little things to reclaim my attention span.

Instead of using a phone when I get up, I try to make sure Marizel is the first thing I see. If I'm up first I'll spend a few minutes watching her sleep. Sometimes I think about random things. Other times I just watch her. I find this helps me focus on what's important.

I usually make us coffee first thing in the morning and I'll look out of the kitchen window while the kettle boils. It's not an amazing view, but the phone stays in the living room. It gives me time every day for my mind to wander. It's only 5 minutes while I wake up, but it makes a real difference to my perspective.

Final Thoughts

The biggest thing I've had to accept is that this is a work in progress. Sometimes I'm going to fail. I'm going to get into arguments on twitter. I'm going to spend too much time on an app for no good reason. There will be times when I'm physically with people, but mentally absent. It's ok. What's important is that I recognise it, and try to stop it happening next time.

But in a life surrounded by bells and flashing lights I can find the time to be present with those I care about. That's worth more than all the likes and shares in the world.

 

from h3artbl33d

Email

I have been running a number of mailservers for the past years – mainly for my firm (as an entrepreneur). A small part is personal; e.g., h3artbl33d.nl runs its own mailserver.

Over time, I outgrew the scenario where a single server (or two, with a fallback) is feasible. Rather than throwing more resources at it, or moving to a more powerful server, I deliberately chose to add additional servers. Not only does this help in setting up a more resilient mail infrastructure, segmentation also benefits security.

In a very early stage, I implemented technologies like SPF, DKIM and DMARC. Most likely, those abbreviations do ring a bell. If not, here is a small explanation:

  • SPF is a technique that uses DNS records; it's basically a list of the mailservers that are allowed to send mail for a certain domain.
  • DKIM adds cryptographic signing on top of that. It allows receivers to verify whether the sender is allowed to send from a particular domain, by using public key cryptography.
  • DMARC is the newest addition; it not only adds another layer of sender verification, but also specifies what action should be taken when a sender fails verification and to whom it should be reported.

These three techniques are a tremendous help in mitigating spoofing. Let's take my domain as an example: h3artbl33d.nl.

If SPF, DKIM and/or DMARC aren't set up at all, anyone could spoof that domain and pose as me – e.g., use hello@h3artbl33d.nl as the sender.

This goes for virtually any domain. E.g., without these techniques and some provider-level filtering, anyone could spoof messages as if they were sent from Microsoft.com, NSA.gov, Whitehouse.gov, etc.
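
To give a rough idea, the three records for a hypothetical example.com could look something like this in a zone file (selector, key and report address are made up; your values will differ):

example.com.                      IN TXT "v=spf1 mx -all"
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"
_dmarc.example.com.               IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"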


Occasionally, I like to experiment with technology. The same goes for email spoofing. In order to have some fun, I stripped an old, deprecated e-mail domain of SPF, DKIM and DMARC. Additionally, the domain I am referring to produces quite some hits on HIBP (Have I Been Pwned).

Next thing, I configured a catch-all on the domain – meaning every single address would be valid and routed to a single inbox – a "Pandora's Box" if you will. This setup catches around 500 messages a day – all SPAM. The messages vary from offers of prescription drugs to SEO offers; from viagra to so-called 'lost contacts'.

Sometimes, I start an effort to scam the scammers – mainly inspired by James Veitch – by replying and pretending I was an actual victim.

Over time, I received quite a number of e-mails like this one:

Though the phrasing varies, it always boils down to the claim that the victim has supposedly been hacked. The webcam was supposedly turned on, all digital activities were tracked and logged – including passwords, porn viewing, etc.

While it might be peanuts for a tech-savvy person to prevent this, or to see it's a scam in the blink of an eye, the same cannot be said for regular users. Heck, it might be really scary to receive such an e-mail.

To put it in perspective, I received a phone call last week from an alerted customer that received one of these e-mails. The customer in question uses an e-mail address supplied by an ISP that has a pretty shitty mailserver setup.

The thing that set off the alarm bells was the mention that the webcam was hacked – the customer in question doesn't have a webcam, so it was all sorted out pretty quickly. But nevertheless – receiving such emails can almost cause a heart attack if you are not able to tell whether it's a scam.

The reason I am writing this blog piece is to raise awareness. If you are managing a mailserver – or if you know folks that do – please implement SPF, DKIM and DMARC (or ask the person responsible to do so). It isn't something you are likely to do within five minutes the first time, but having these techniques in place can save you from quite the headache!

Let's make the web great again!

 

from h3artbl33d

Whether you are a pentester or do some occasional auditing, most likely you are familiar with Metasploit – or have heard of it. It's considered to be an essential tool for offensive security. I have always been a little stunned by the fact that Metasploit is often run from Kali. Linux is far from secure; Kali takes this to the next level by running everything as UID 0 (root). Offensive and defensive security ought to go hand-in-hand. So, obviously, let's combine these two and install Metasploit on OpenBSD. Puffy for the win!

Preparing the dependencies

Metasploit has some dependencies that we have to install beforehand; it needs these applications and settings in order to function correctly.

Ruby

Install Ruby 2.6 by issuing pkg_add ruby and choosing version 2.6. Upon successful installation, a notice is shown explaining that you can set some subapplications as the default version. Unless you are currently running Ruby applications – or intend to do so in the future – setting 2.6 as the default Ruby is safe. Execute these commands to set version 2.6 and its subapplications as the system default:

doas ln -sf /usr/local/bin/ruby26 /usr/local/bin/ruby
doas ln -sf /usr/local/bin/erb26 /usr/local/bin/erb
doas ln -sf /usr/local/bin/irb26 /usr/local/bin/irb
doas ln -sf /usr/local/bin/rdoc26 /usr/local/bin/rdoc
doas ln -sf /usr/local/bin/ri26 /usr/local/bin/ri
doas ln -sf /usr/local/bin/rake26 /usr/local/bin/rake
doas ln -sf /usr/local/bin/gem26 /usr/local/bin/gem
doas ln -sf /usr/local/bin/bundle26 /usr/local/bin/bundle
doas ln -sf /usr/local/bin/bundler26 /usr/local/bin/bundler

PostgreSQL

Metasploit requires a database to store information. The recommended DBMS is PostgreSQL, with which I am pretty happy. Installing it is pretty straightforward: pkg_add postgresql-server.

Some additional configuration is necessary before running it:

su - _postgresql
mkdir /var/postgresql/data
initdb -D /var/postgresql/data -U postgres -A scram-sha-256 -E UTF8 -W
exit
rcctl start postgresql
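
To have PostgreSQL come back after a reboot, you probably also want to enable the daemon:

rcctl enable postgresql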

Now, we need to create a database and user to store everything in:

psql -U postgres
CREATE DATABASE metasploit;
CREATE USER sploit WITH ENCRYPTED PASSWORD 'password';
GRANT ALL PRIVILEGES ON DATABASE metasploit TO sploit;
\q

Setting up Metasploit

In the previous steps we prepared the dependencies; in this step we can set up Metasploit itself.

useradd -b /usr/local -m -s /sbin/nologin metasploit
doas -u metasploit git clone https://github.com/rapid7/metasploit-framework.git ~metasploit/app

More dependencies

Metasploit itself does need some Ruby 'gems' (extensions). Install them with:

cd ~metasploit/app
bundle install

Editing the database

Copy over the configuration and open it with your favorite editor, e.g.:

cp /usr/local/metasploit/app/config/database.yml.example /usr/local/metasploit/app/config/database.yml
vi /usr/local/metasploit/app/
chown metasploit:metasploit /usr/local/metasploit/app/config/database.yml

The configuration might speak for itself; if not you want to edit lines 9, 10 and 11:

  database: metasploit
  username: sploit
  password: password

That's it. Now you have set up Metasploit! Happy and safe pentesting!
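
As a quick smoke test – how you switch to the metasploit user depends on your doas.conf, so treat this as a sketch – you can start the console from the checkout and check the database connection with the db_status command:

doas -u metasploit sh -c 'cd ~metasploit/app && ./msfconsole -q'
msf5 > db_status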

 

from OpenBSD Amsterdam

For OpenBSD Amsterdam we were looking for a lightweight method to keep track of, at least, traffic and CPU load generated by the vmm(4)/vmd(8) hosts.

We had some experience with Observium, which doesn't run well on OpenBSD, and LibreNMS. For some reason we were unable to get LibreNMS working on 6.4 nor on -current (6.5), so we decided to look elsewhere.

Considering our needs and what is available on OpenBSD we decided to go back in time and have a look at MRTG again.

Getting MRTG working with OpenBSD snmpd and collecting traffic is not a very big deal; cfgmaker is your friend! Getting CPU load was more of a challenge.

First we had to figure out what the SNMP OIDs were for the CPU, as the default ones in the MRTG documentation didn't cover them. We also had to consider the multi-core machines we are running.

After some digging in the MIBS we found 'hrProcessorLoad' in /usr/local/share/mibs/HOST-RESOURCES-MIB.txt.

$ snmpctl snmp walk <host> community <string> oid hrProcessorLoad
hrProcessorLoad.1=24
hrProcessorLoad.2=57
hrProcessorLoad.3=33
hrProcessorLoad.4=26
hrProcessorLoad.5=21
hrProcessorLoad.6=25
hrProcessorLoad.7=77
hrProcessorLoad.8=68
hrProcessorLoad.9=61
hrProcessorLoad.10=54
hrProcessorLoad.11=24
hrProcessorLoad.12=50
hrProcessorLoad.13=0
hrProcessorLoad.14=0
hrProcessorLoad.15=0
hrProcessorLoad.16=0
hrProcessorLoad.17=0
hrProcessorLoad.18=0
hrProcessorLoad.19=0
hrProcessorLoad.20=0
hrProcessorLoad.21=0
hrProcessorLoad.22=0
hrProcessorLoad.23=0
hrProcessorLoad.24=0

With some Startpage/DuckDuckGo-fu we stumbled upon a script that pulled a specific OID and ran some calculations on the CPU load based on the total number of cores and the sum of the load across these cores.

Here is the heavily modified version of that script.

#!/bin/sh
test -n "$1" || exit 1
HOST="$1"
CPUINFO="/tmp/cpuinfo.${HOST}"

snmpctl walk ${HOST} oid hrProcessorLoad | cut -d= -f2 > ${CPUINFO}
CORES=$(grep -cv "^0$" ${CPUINFO})
CPU_LOAD_SUM=$(awk '{sum += $1} END {print sum}' ${CPUINFO})
CPU_LOAD=$(echo "scale=2; ${CPU_LOAD_SUM}/${CORES}" | bc -l)
echo "$CPU_LOAD"
echo "$CPU_LOAD"

It reads all the CPU information from the host and writes the load of each core in a temporary file in /tmp. The cores are counted and the sum is calculated. Since SMT / Hyper Threading is off by default, we excluded the cores which are not taking any load.

MRTG expects two values, as it primarily operates on inbound and outbound traffic, so we print $CPU_LOAD twice. Job done! Not quite... It also expects the uptime to be presented in a readable format, as well as the hostname.

So... TimeTicks here we come! To collect the uptime of an OpenBSD machine we need to query hrSystemUptime.

hrSystemUptime OBJECT-TYPE
    SYNTAX     TimeTicks
    MAX-ACCESS read-only
    STATUS     current
    DESCRIPTION
        "The amount of time since this host was last
        initialized.  Note that this is different from
        sysUpTime in the SNMPv2-MIB [RFC1907] because
        sysUpTime is the uptime of the network management
        portion of the system."
    ::= { hrSystem 1 }

The snmpctl command is:

$ snmpctl snmp get <host> community <string> oid hrSystemUptime.0 
0=1525917187

In order to get anything that resembles a time we can read, there are a number of calculations that need to happen.

1525917187 / 8640000 = days (+remainder) = 176.6107855324074
0.6107855324074 * 24 = hours (+remainder) = 14.65885277777778
0.65885277777778 * 60 = minutes (+remainder) = 39.53116666666667
0.53116666666667 * 60 = seconds.milliseconds = 31.87

Together with Roman Zolotarev we came up with the following part of the script:

TICKS=$(snmpctl snmp get ${HOST} oid hrSystemUptime.0 | cut -d= -f2)
DAYS=$(echo "${TICKS}/8640000" | bc -l)
HOURS=$(echo "0.${DAYS##*.} * 24" | bc -l)
MINUTES=$(echo "0.${HOURS##*.} * 60" | bc -l)
SECS=$(echo "0.${MINUTES##*.} * 60" | bc -l)
test -n "$DAYS" && printf '%s days, ' "${DAYS%.*}"
printf '%02d:%02d:%02d\n' "${HOURS%.*}" "${MINUTES%.*}" "${SECS%.*}"

Which results in 176 days, 14:39:31

The last part which MRTG expects is the hostname. This can be collected with:

snmpctl snmp get ${HOST} oid sysName.0 | cut -d= -f2 | tr -d '"'

All done!

What MRTG gets from the script is something like:

3.50
3.50
138 days, 02:37:03
server1.openbsd.amsterdam
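
Wiring the script into mrtg.cfg is then a matter of pointing a target at it with backticks; a sketch, assuming the combined script is installed as /usr/local/bin/cpu.sh:

WorkDir: /var/www/htdocs/mrtg
Target[server1-cpu]: `/usr/local/bin/cpu.sh server1.openbsd.amsterdam`
MaxBytes[server1-cpu]: 100
Options[server1-cpu]: gauge,growright,nopercent
Title[server1-cpu]: server1 CPU load
PageTop[server1-cpu]: <h1>server1 CPU load</h1>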

The complete script can be found in our Git Repository.

You can follow us on Twitter and Mastodon.

 

from steve

Now that my OpenBSD.Amsterdam VPS is up and running, and I have working backups, I thought I'd migrate some static sites over to this host and free up another dedicated server I'm using. Adding extra static HTML won't add to the VPS' general load and won't introduce new risks to #Chargen.One.

To do this, I need to implement name-based Virtual Hosting. I'm going to show how this is done for one site, hackingforfoodbanks.org, then build upon it for multiple hosts. Finally, I'll modularize elements of the configuration to make things more manageable, including HTTPS support.

To make Name-based virtual hosting work, it's necessary to update /etc/acme-client.conf, the DNS Records for the domain in question, and the nginx configuration.

Moving DNS

This is the simplest part of the job. Log into the DNS provider or server, point the relevant 'A' and/or 'CNAME' records at the HTTP server's IP address, and be prepared to wait up to 24 hours for the changes to propagate.

Now that DNS is out of the way, the next thing is to clean up the nginx config from earlier.

Segregating the Nginx config

The config as-is is fine for just hosting Chargen.One but could get a bit unwieldy if I move all of my static sites across. I created a subdirectory in /etc/nginx/ called sites, into which I can add server blocks for each site I want to host. This splits the configuration up into more manageable per-site blocks.

Before adding a new host, I split out the default chargen.one site config into a new file, /etc/nginx/sites/default.conf. This is a copy of the server blocks from the main /etc/nginx/nginx.conf, with everything from the opening server { to the closing } included. It looks like this:

server {
	listen       80 default_server;
	listen       [::]:80 default_server;
	server_name  _;
	root         /var/www/htdocs/c1;
	
	include acme.conf;
	
	#access_log  logs/host.access.log  main;
	#error_page  404              /404.html;
	
	# redirect server error pages to the static page /50x.html
	error_page   500 502 503 504  /50x.html;
	location = /50x.html {
	    root  /var/www/htdocs/c1;
	}
	
	# For reading content
	location ~ ^/(css|img|js|fonts)/ {
	        root /var/www/htdocs/c1;
	        # Optionally cache these files in the browser:
	        # expires 12M;
	}

	
	location ~ ^/.well-known/(webfinger|nodeinfo|host-meta) {
	    proxy_set_header Host $host;
	    proxy_set_header X-Real-IP $remote_addr;
	    proxy_set_header X-Forwarded-For $remote_addr;
	    proxy_pass http://127.0.0.1:8080;
	    proxy_redirect off;
	}
	
		
	location / {
	    proxy_set_header Host $host;
	    proxy_set_header X-Real-IP $remote_addr;
	    proxy_set_header X-Forwarded-For $remote_addr;
	    proxy_pass http://127.0.0.1:8080;
	    proxy_redirect off;
	}
}

# HTTPS server
#
server {
	listen       443 default_server;
	server_name  _;
	root         /var/www/htdocs/c1;
	include /etc/nginx/acme.conf;
	
	ssl                  on;
	ssl_certificate      /etc/ssl/chargen.one.fullchain.pem;
	ssl_certificate_key  /etc/ssl/private/chargen.one.key;
	ssl_session_timeout  5m;
	ssl_session_cache    shared:SSL:1m;
	ssl_ciphers  HIGH:!aNULL:!MD5:!RC4;
	ssl_prefer_server_ciphers   on;
	
	location ~ ^/.well-known/(webfinger|nodeinfo|host-meta) {
	    proxy_set_header Host $host;
	    proxy_set_header X-Real-IP $remote_addr;
	    proxy_set_header X-Forwarded-For $remote_addr;
	    proxy_pass http://127.0.0.1:8080;
	    proxy_redirect off;
	}
		
	location ~ ^/(css|img|js|fonts)/ {
	    root /var/www/htdocs/c1;
	    # Optionally cache these files in the browser:
	    # expires 12M;
	}
		
	location / {
	    proxy_set_header Host $host;
	    proxy_set_header X-Real-IP $remote_addr;
	    proxy_set_header X-Forwarded-For $remote_addr;
	    proxy_pass http://127.0.0.1:8080;
	    proxy_redirect off;
	}
}

With that entire block removed from the main config, below the line server_tokens off;, there's just the following remaining in /etc/nginx/nginx.conf:

include /etc/nginx/sites/*.conf;

If I want to disable a site, I change the file extension from .conf to .dis and restart nginx. That way I can easily see which sites are enabled and which sites aren't without having to mess with the ln command or symbolic links.

Adding a new virtual host

The first host is the hardest, but once it's up and running it provides a template for any future hosts. I keep things fairly minimal, but adding support for PHP-based sites is as simple as copying from the default OpenBSD nginx config. The TLS config still points to the chargen.one certificate as only the certificate's associated hostnames change, not the filename.

    server {
        listen       80;
        server_name  hackingforfoodbanks.org www.hackingforfoodbanks.org;
        root         /var/www/htdocs/hackingforfoodbanks;

        include /etc/nginx/acme.conf;
        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root  /var/www/htdocs/hackingforfoodbanks;
        }

        location / {
            try_files $uri $uri/ =404;
            # Optionally cache these files in the browser:
            # expires 12M;
        }

    }

    # HTTPS server
    #
    server {
        listen 443;
        server_name  hackingforfoodbanks.org www.hackingforfoodbanks.org;
        root         /var/www/htdocs/hackingforfoodbanks;
        include /etc/nginx/acme.conf;

        ssl                  on;
        ssl_certificate      /etc/ssl/chargen.one.fullchain.pem;
        ssl_certificate_key  /etc/ssl/private/chargen.one.key;

        ssl_session_timeout  5m;
        ssl_session_cache    shared:SSL:1m;

        ssl_ciphers  HIGH:!aNULL:!MD5:!RC4;
        ssl_prefer_server_ciphers   on;

	location / {
	    root /var/www/htdocs/hackingforfoodbanks;
	    # Optionally cache these files in the browser:
	    # expires 12M;
	}

}

The only major differences are the removal of default_server in the listen directives, the changes to server_name and root to point to the correct spot and the removal of all of the dynamic parts associated with Chargen.One. Check whether or not there are problems with the nginx config before restarting by using the following command:

nginx -t -c /etc/nginx/nginx.conf

Providing the syntax is ok, restart nginx with rcctl restart nginx as root, or via doas.

Adding domains to acme-client

The final part of the puzzle is to add LetsEncrypt support for the new domain. The easiest way to add domains to acme-client is through the alternative names feature. Here's what I've added to /etc/acme-client.conf in order to support the hackingforfoodbanks.org URL.

alternative names { hackingforfoodbanks.org www.hackingforfoodbanks.org }

After adding that, and deleting the existing /etc/ssl/chargen.one.crt file, acme-client can be called to add the new domain.

rm /etc/ssl/chargen.one.crt
acme-client -vFAD chargen.one

Note that the alternative names for our new domains are under the chargen.one domain section. The domain section name is passed to acme-client, not the domain itself.
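
For context, the relevant domain block in /etc/acme-client.conf then looks roughly like this (paths as used elsewhere in this post; the letsencrypt authority comes from the stock example config):

domain chargen.one {
	alternative names { hackingforfoodbanks.org www.hackingforfoodbanks.org }
	domain key "/etc/ssl/private/chargen.one.key"
	domain certificate "/etc/ssl/chargen.one.crt"
	domain full chain certificate "/etc/ssl/chargen.one.fullchain.pem"
	sign with letsencrypt
}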

With a fully functioning certificate and nginx setup, run rcctl restart nginx to finish things off, and test the new site in a browser.

Adding HTTPS redirects

You might want to redirect some of your sites to HTTPS rather than serve an HTTP version at all. While often touted as a panacea, HTTPS brings a mix of advantages and drawbacks:

  • The content being delivered is wrapped in transport layer encryption, making it harder for an eavesdropper to identify the content being transferred (confidentiality)
  • Because the transfer is encrypted, it also becomes harder to tamper with the content in transit (integrity)
  • HTTPS relies on a trust model that is routinely (if temporarily) broken and often abused by companies and nation states. So while it's useful, it shouldn't be relied on for bulletproof, 100% security.
  • The TLS versions currently used for HTTPS aren't supported by the browsers available for legacy Operating Systems such as Windows XP. This means your site may be inaccessible from Windows XP and older versions of Android.

I'm not saying don't use HTTPS for a static site. There's no harm in supporting both, especially for static content. Just consider the site's audience and make a reasoned, deliberate decision about whether to keep serving content over plain HTTP before proceeding.

This site is accessible over HTTP and HTTPS precisely so users of older systems can still access the content via the reader, but authenticated access only works over HTTPS, and no mixed content is loaded.

As people accessing hackingforfoodbanks.org may not have access to current technology (e.g. foodbank users), I made a conscious decision to leave HTTP access open. For another site, rawhex.com, there's less of a requirement to leave HTTP access open, so I'll redirect that to HTTPS.

It's always annoying when a doc doesn't show the whole config for something complicated, so here's the /etc/nginx/sites/rawhex.conf file in full:

    server {
        listen       80;
        server_name  rawhex.com www.rawhex.com;
        return 301 https://$server_name$request_uri;

    }

    # HTTPS server
    #
    server {
        listen 443;
        server_name  rawhex.com www.rawhex.com;
        root         /var/www/htdocs/www.rawhex.com;
        include /etc/nginx/acme.conf;

        ssl                  on;
        ssl_certificate      /etc/ssl/chargen.one.fullchain.pem;
        ssl_certificate_key  /etc/ssl/private/chargen.one.key;

        ssl_session_timeout  5m;
        ssl_session_cache    shared:SSL:1m;

        ssl_ciphers  HIGH:!aNULL:!MD5:!RC4;
        ssl_prefer_server_ciphers   on;

        # add HSTS header to ensure we don't hit the redirect again
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;


        location / {
            root /var/www/htdocs/www.rawhex.com;
            # Optionally cache these files in the browser:
            # expires 12M;
        }

    }

The HTTP 301 redirect shrinks the rest of the plain HTTP block to almost nothing. The HSTS header ensures that, once redirected, a browser will only make requests over HTTPS, even if the user clicks on an HTTP link. The end result is an A+ score from Qualys' SSL Labs. There are things that could be done to improve the score further, but they come at the cost of compatibility with older browsers and Operating Systems such as Windows Vista and 7.
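
A quick way to check both the redirect and the HSTS header is with curl. It's not in OpenBSD base, so this assumes it's been installed with pkg_add curl:

curl -sI http://rawhex.com/ | grep -i '^location'
curl -sI https://rawhex.com/ | grep -i '^strict-transport-security'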

Modularizing further

You might've noticed in the above that I'm repeating a lot of SSL settings. For HTTPS sites, it's best to keep things consistent. As such I've moved my SSL settings (aside from HSTS) into a separate file, /etc/nginx/https.conf. This means I only have to change one file for all HTTPS site configs. The current version of my file looks like this:

        ssl                  on;
        ssl_certificate      /etc/ssl/chargen.one.fullchain.pem;
        ssl_certificate_key  /etc/ssl/private/chargen.one.key;

        ssl_session_timeout  30m;
        ssl_session_cache    shared:SSL:2m;

        ssl_ciphers  HIGH:!aNULL:!MD5:!RC4;
        ssl_prefer_server_ciphers   on;

I set a higher SSL session timeout and cache size for performance reasons. Visitors should be able to use a single SSL session to cover a full visit to and around the site. People rarely spend longer than 30 minutes there unless they leave a tab open, at which point I'm happy to reinitialize.

Please don't confuse SSL Sessions with HTTP or application sessions. They're different things. If in doubt, the defaults are probably fine.

Now all I have to do is add include /etc/nginx/https.conf; below include /etc/nginx/acme.conf; in each site's config, and any changes to ciphers or timeouts will be picked up system-wide with a single change.
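
For illustration, here's roughly what the rawhex.conf HTTPS block might shrink to once the shared settings live in https.conf. This is a sketch, not the exact file:

    server {
        listen 443;
        server_name  rawhex.com www.rawhex.com;
        root         /var/www/htdocs/www.rawhex.com;
        include /etc/nginx/acme.conf;
        include /etc/nginx/https.conf;

        # HSTS stays per-site
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

        location / {
            root /var/www/htdocs/www.rawhex.com;
        }
    }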

Conclusion

Now that I can add static sites to the Chargen.One system, I'll migrate the rest of my content over. With a clean, modular nginx config, content is served speedily and thanks to OpenBSD, to a level of security I'm comfortable with. I still need to find somewhere to move my git repos to, and I'm not sure chargen.one is right for that, but roman has a few ideas that I might borrow from.

 
Read more...

from steve

Sometimes I miss things on the Internet the first time round. I'm not aware they were things until I stumble across them randomly some time later. A week ago I came across the writing of Bronnie Ware, a palliative nurse in Australia who, in 2009, documented the 5 most common regrets she encountered from the dying.

The regrets themselves weren't surprising. I've encountered all of them at some point. What surprised me was that the ones I'd encountered were the common ones. In her post, the regrets of the dying, Bronnie lists 5 regrets she commonly encountered from her patients:

  1. I wish I’d had the courage to live a life true to myself, not the life others expected of me.
  2. I wish I hadn’t worked so hard.
  3. I wish I’d had the courage to express my feelings.
  4. I wish I had stayed in touch with my friends.
  5. I wish that I had let myself be happier.

Bronnie went on to write a book about this, The Top Five Regrets of the Dying – A Life Transformed by the Dearly Departing. Her blog post touched a lot of people at the time, including YCombinator founder Paul Graham.

Graham viewed these regrets as errors, in this case errors of omission. Not everyone has had the opportunities Graham has had in life, and while I can understand his rationale, I'm not sure I agree with it. I believe these regrets have an internal element, but they rarely arise in a vacuum.

I looked for signs of these regrets in myself and those around me. I found examples everywhere amongst my neighbours, family and friends:

  1. The regret of the women who married the men who knocked them up because it was expected at the time.
  2. The regret of the men who try so hard to provide for their children they never get to grow close to them.
  3. The gay men and trans women who attempted suicide because they couldn't reconcile their identity with their devout cultural or religious beliefs.
  4. The old man living alone in his house, slowly forgetting everything and everyone he knew.
  5. The women who spend their lives looking after everyone else, barely, if ever, making time for themselves.

I've experienced all 5 forms of Bronnie's regrets. Thankfully I've always had the ability to do something about it. I don't pretend that others have that capability. In fact, I doubt most people are aware of these regrets until long after they've formed.

What I can do, when I see this in others, is be kind, be patient, encourage them to open up, and listen. But I felt I should find a way to identify the first signs of these regrets in myself.

Graham inverted the regrets to create a list of 5 commands, but I found these to be very negative. Perhaps that's OK for him; it's not really for me. Instead, I chose 5 questions to ask myself periodically. Their purpose is to help me become more mindful of the things that make me sad:

  1. Have I been authentic throughout the month, or was there a moment where I became a version of me to meet the expectations of others?
  2. Have I made enough time and space this week to be with those I love?
  3. This week, have I been continuously open and honest with myself and those around me?
  4. When did I last talk about something other than work to those I really care for?
  5. What did I do for me this week?

I've put these 5 questions up here, so I can check in on myself now and again. I also have them in a notes folder so I can go through the list once a week.

If the answer to a question is no, I make a note in Joplin about why the answer is no, and what I'll do to address it. It's OK for there to be a no response to a question, but I should at least make a conscious decision about it when it arises. There are no wrong answers; the thinking alone is often enough to kick me into gear.

My hope is that by asking these questions regularly, I can avoid things before they become regrets, instead of fixing them later on. That way, whenever it's time to die, I can do so with no regrets.

 
Read more...

from steve

A note before we start: the approach and scripts discussed here use mysqldump to back up a database at one point. Yes, I know this isn't in OpenBSD base, but it was added just for this specific system. It's easy to leave out, and everything else is done with tools from the base system.

With any system, it's important that backups and restores work properly. With #Chargen.One, I wanted to protect user data and be able to restore it easily. The important things for the backup were:

  • Single timestamped backup file
  • No additional software installed above what's already on the box
  • Backups are pulled from a central server, not pushed to one
  • Portable script
  • Use privilege separation so local accounts can't access backups

Most services like borg backup work best with a backup server visible from the server being backed up. I use a huge NAS to store my backups, and a separate server to store backups of backups. The NAS is behind a firewall, and the other server can see the NAS but not the Internet. As such, I need a backup system that lets me pull from Chargen.One onto the NAS, and then lets my isolated backup server pull from the NAS in turn. If that sounds a little paranoid, at least you understand why I use OpenBSD.

Backups are taken by root on a nightly basis and put into a folder belonging to a dedicated backup user. Early in the morning, the backup is pulled by a remote system, which then deletes it from chargen.one. The remote backup system keeps 30 days' worth of backups.

Configuring a backup account

The /home partition is the largest on the VPS, so I created an account there to hold the backups. The backup script stages content in a temporary folder, creates an archive, deletes the temporary folder, and changes permissions so the remote backup system can pull and then remove the backup archive.

To set up the backup user account, use the following commands (as root):

# useradd -m backup
# chmod 700 /home/backup
# su - backup
$ mkdir -p .ssh && chmod 700 .ssh

On the backup system, generate an ssh keypair using ssh-keygen -t ed25519. Copy the contents of id_ed25519.pub from the backup system into /home/backup/.ssh/authorized_keys on the server being backed up.

SSH into the backup account on the server from the NAS to make sure everything works.
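
As a rough sketch of that key exchange from the NAS side (hostnames here are just illustrative):

# On the NAS (the system pulling the backups)
ssh-keygen -t ed25519
cat ~/.ssh/id_ed25519.pub
# paste the output into /home/backup/.ssh/authorized_keys on chargen.one

# Confirm key-based login works
ssh backup@chargen.one uptime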

Each backup archive is created by a script that creates and stores content in /home/backup/backup/. Once backed up, the script will create a timestamped archive file and delete the /home/backup/backup/ directory. The script starts off very simply:

#!/bin/sh

mkdir /home/backup/backup
# Add stuff below here


# Don't add stuff below here
rm -rf /home/backup/backup

Backing up MySQL data

If you want to implement my backup scheme and don't run MariaDB or MySQL, then skip this section and back up using commands from base only.

Because MySQL is configured to use passwords, a /root/.my.cnf file containing credentials for the mysqldump command is needed.

[mysqldump]
user=root
password=your_password_here
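
Since that file contains a password in plain text, it's worth making sure only root can read it (a suggestion on top of the original setup, not something the script depends on):

chmod 600 /root/.my.cnf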

The mysqldump command fully backs up all mysql databases, routines, events and triggers.

Add the following to the backup script (all one line):

mysqldump -u root -A -R -E --triggers --single-transaction | gzip -9 > /home/backup/backup/mysql.gz

The --single-transaction option causes the backup to take place without locking tables.
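
An optional sanity check that the dump is valid gzip and actually contains SQL:

gzip -t /home/backup/backup/mysql.gz && zcat /home/backup/backup/mysql.gz | head -n 3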

Backing up a package list

OpenBSD uses its own package management tools (pkg_add(1), pkg_info(1) and friends). To create a backup of installed packages, add the following to the backup script:

pkg_info -mz > /home/backup/backup/packages.txt

This can then be restored from a backup using pkg_add -l packages.txt.

Backing up files

The following files and directories should be backed up:

  • /etc
  • /root
  • /var/www
  • /var/log
  • /var/cron
  • /home, excluding /home/backup
  • /usr/local/bin/writefreely
  • /usr/local/share/writefreely

Use the tar command to create backups. A discussion of the tar command is best left to man tar, but as the backup isn't very large, I'm not using incremental backups, which keeps things simple...up to a point.

OpenBSD's tar implementation doesn't add the --exclude option as it's a GNU extension. Other BSDs such as FreeBSD do add the option, but the OpenBSD team prefer not to have it. I could've added the GNU tar package, but one of the stated goals of the script is to not require additional software to keep things portable. Paths such as /home/backup are excluded using shell expansion instead.

To test this, try the following command:

# tar cvf bk.tar /home/!(backup)

The exclamation mark means match everything except what's in the parentheses. For multiple directories, separate the names with a pipe symbol, e.g. !(backup|user) to exclude both the backup and user directories.
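
You can see what the pattern expands to straight from the shell (the directory names in the output below are hypothetical):

$ cd /home
$ echo !(backup)     # lists everything in /home except "backup"
steve www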

There are complications: error messages are shown on each backup if absolute paths are passed to tar, which means an email would be generated every night even if the backup succeeds. Ain't nobody got time for that.

As a workaround, changing to the root directory at the start makes all paths relative, and allows the shell expansion to work. The -C switch can be used instead, but this breaks shell expansion.

The final commands to go in the script look like this:

cd /
tar cf /home/backup/backup/files.tar etc/!(spwd.db) root \
	var/www var/log home/!(backup) var/cron \
	usr/local/bin/writefreely usr/local/bin/backup.sh \
	usr/local/share/writefreely

I've used backslashes to break up the lines for readability, but all the paths could be put on a single line if preferred.

I've excluded /etc/spwd.db from the backup because OpenBSD's built-in tar uses a feature called pledge that restricts access to certain files. The file isn't particularly important to this specific backup, but contains the shadow password database, which I'm happy to recreate as part of the restore process.

At this point you might wonder why gzip compression isn't being used in the tar archive. This is because the final archive will be compressed, and there's no point in compressing twice.

Creating the final archive

To distinguish between backups by date, I use a timestamp generated by the date command. By default the output contains spaces and colons, neither of which play nicely across Operating Systems and filesystems. Use date +%F_%H%M%S to generate a more reasonable format. Using tar's -C switch changes the tar working directory to /home/backup and stops a leading / error message appearing in the backup.
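
For example (the timestamp shown is obviously just illustrative):

$ date +%F_%H%M%S
2019-03-12_010000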

The final tar command in the backup script should look like this:

tar zcf /home/backup/c1_$(date +%F_%H%M%S).tgz -C /home/backup backup

It's also important to change the file ownership to the backup user so the remote system can delete the backup after it's been created.

chown backup:backup /home/backup/c1_*

The full backup script on chargen.one looks like this:

#!/bin/sh

mkdir /home/backup/backup
# Add stuff below here

# MySQL Backup
mysqldump -u root -A -R -E --triggers --single-transaction | gzip -9 > /home/backup/backup/mysql.gz

# Packages backup
pkg_info -mz > /home/backup/backup/packages.txt

# Files backup
cd /
tar cf /home/backup/backup/files.tar etc/!(spwd.db) root \
	var/www var/log home/!(backup) var/cron \
	usr/local/bin/writefreely usr/local/bin/backup.sh \
	usr/local/share/writefreely

# Final archive
tar zcf /home/backup/c1_$(date +%F_%H%M%S).tgz \
	-C /home/backup backup

# Fix permissions
chown backup:backup /home/backup/c1_*

# Don't add stuff below here
rm -rf /home/backup/backup

Automating the backup

As root (via su -, not doas), use crontab -e and add the following entry:

0 1 * * * /usr/local/bin/backup.sh

A new backup is created at 1am, every morning. On the remote server, a cron job calls a script at 3am to pull the backup down via scp using the following:

#!/bin/sh

find /Backups/c1/ -mtime +30 -exec rm {} \;
scp -q backup@chargen.one:./c1_*.tgz /Backups/c1/
ssh backup@chargen.one rm c1_*.tgz

And that's it! The secondary backup server pulls down the contents of /Backups from the NAS, so there's nothing left to do.

I'll write a separate post about restoring, as this post is already getting long, but hopefully it's useful to people who want pull, rather than push, backups.

 
Read more...