Wahl-O-Mat - User Manual

The claim that one can only compare the positions of eight "large" parties with each other is false.

In my opinion, limiting the comparison to eight simultaneously displayed parties is purely a matter of ergonomics. Of course it was possible to include small parties in the comparison as well, and you could perfectly well compare more than eight parties with each other - just not in a single pass.

My recommendation to the BPB is to provide the detailed answers of all parties in one big PDF, which could then, if needed, be projected onto a wall or printed out.

Users of ordinary screens can compare the positions of any parties quite easily: select the parties whose positions you want to compare, and look at their positions. If you are interested in the answers of further parties to these questions, you can use the browser's back button to return to the party selection and choose different parties. And if you have trouble remembering all the positions, you could, at least in theory, copy down the questions and answers before changing the party selection to display a new evaluation.

The only thing one can criticize is that the "large" parties are listed at the top of the party overview, so that it is marginally "easier" to select them. But anyone as literate as one must expect a citizen who is to make a voting decision to be, should be able to scroll down a little in their browser to find the other parties. On my screen, some of the other parties are already partially visible, and my browser uses a scrollbar to tell me that I am only seeing part of the page. I assume the same is true for many other Internet users.

Mozilla's Certificate Gaffe

The following rant is based purely on experience and speculation, not on a code review or on other insights such as design documents.

A few days ago, like many other Firefox users, I noticed that all my add-ons had been disabled. A temporary workaround was already available on the Internet, but the whole episode made me wonder why Mozilla keeps periodically checking the signatures of already-installed software.

From my point of view, there is some merit to the idea that software a user installs should have a traceable origin. This is customarily achieved by code signing. The Debian project does this, and I think CentOS and others do so as well by now. In the case of Debian, you download a keyring of "known-good" GPG keys, and packages are signed with one of the keys contained in that keyring. Third-party vendors like e.g. Google, Docker or Jenkins use their own keys, of course. But once you have installed the software, no signature or key is ever checked again, and the software continues to run indefinitely, as far as code signatures are concerned. No potential for breakage due to operational conditions here.

This approach has some nice properties: it works offline, since the basic set of keys is part of the installation media; you can verify the keys at any time you want to install or re-install a package; and there is no data leakage to the outside world, since all operations are purely local.
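
As a minimal sketch of that model, assuming a Debian-style pinned keyring and a detached signature file (the paths here are illustrative), verification happens exactly once, at install time:

import subprocess

# The usual location of the archive keyring on Debian systems.
KEYRING = "/usr/share/keyrings/debian-archive-keyring.gpg"

def verify_once(signature_file, data_file):
    # gpgv checks a detached signature against an explicit keyring;
    # after installation, nothing is ever re-checked.
    status = subprocess.call(
        ["gpgv", "--keyring", KEYRING, signature_file, data_file])
    return status == 0

# e.g. verify_once("Release.gpg", "Release")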

Now look at Firefox:

It's not quite clear to me when exactly the signature checking occurs, but let's explore these two plausible cases:

  1. signature checking at install time
  2. periodic signature checking

Signature Checking at Install Time

If the signature of an add-on is only checked at install time, then the only explanation I can see for all the add-ons now being disabled is that someone released new versions of every add-on I have installed, all at the same time - and the same for every other user on the Internet. This is extremely unlikely. It is much more likely that Firefox checks the signatures of all installed add-ons whenever it is about to install a new (version of an) add-on. Since Firefox ships with auto-updates enabled for all add-ons, this would then happen at the earliest occasion an update is due to be installed.

But the proper way to handle the problem is not to disable every add-on whose signature cannot be checked - at the very least, there should be an easily accessible option to re-enable an add-on that was disabled. Unfortunately, by default, Firefox is quite terse in this area. A much saner way to handle the situation would be to not install the new add-on, but to keep the old version running unless overridden by the user.
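
As a sketch, the flow I am arguing for would look roughly like this; everything here is a hypothetical placeholder, not Mozilla's actual code:

def signature_valid(addon):
    # placeholder: real code would verify the add-on's signature
    # against Mozilla's certificate chain
    return False

def update_addon(installed, candidate):
    # Verify the incoming version, but never disable an add-on that
    # was already vetted when it was first installed.
    if signature_valid(candidate):
        return candidate   # install the update
    print("signature check failed for %s, keeping version %s"
          % (candidate["name"], installed["version"]))
    return installed       # keep the old version running

old = {"name": "example-addon", "version": "1.0"}
new = {"name": "example-addon", "version": "1.1"}
assert update_addon(old, new) is old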

Periodic Signature Checking

Another possibility is that Firefox periodically checks the signatures of all installed add-ons. That would be a privacy invasion, as it would leak usage data to Mozilla - something I am not confident all Firefox users would have consented to, had they been asked.

But reading the Mozilla.org website and their Twitter account, where they said they would be posting updates, suggests that this kind of "nanny-state" behaviour would be perfectly in line with their general conduct, which appears more and more to replace proper software engineering with propaganda and coercion, frequently about things which are unrelated to, or contribute little to, the practical freedom, security and privacy of the actual users of their software. I also don't really understand how much of a problem it would be to issue a new certificate and have the relative handful of add-ons available for the new version(s) of Firefox automatically re-signed and upgraded as well. Instead, they are trying to cram their "studies" down the throats of Firefox users, which would basically surrender my browser to them for total remote control. This isn't going to fly with me, and I think I'm not alone in this regard. So here we are, two or three days after the event.

Summary and Recommendations

I don't know what they are really doing here regarding the signature checking, but disabling add-ons after they have been initially vetted does raise a number of serious issues. As outlined above, there is the operational issue for the users. Then there is the potential breach of privacy, depending on how exactly the signature checking is done.

I sincerely hope they do not have just one certificate chain for signing both the add-ons and the browser. One obvious solution would be to bake a new certificate chain into the browser, sign all the add-ons with it, and then push out the update, so users could install the new certificate chain in a secure manner.

If the concern is that Firefox as a whole is too insecure to trust the add-ons to remain unchanged in the profile once installed, then maybe the idea of operating system packages for these add-ons - installed into a read-only area of the computer by an administrative user, e.g. somewhere under /usr, where Firefox itself cannot modify them - is not such a bad one after all. There should also be a way for the user to install third-party add-ons without Mozilla's interference, and without disabling signature checking altogether. One way to do that would be additional search paths for third-party or user-created add-ons, which could likewise sit outside the reach of Firefox itself, e.g. somewhere under /usr/local. Although these examples are for Linux or Unix, I am confident that recent versions of Windows also have ways to make it difficult for normal users to overwrite parts of the operating system.
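
A minimal sketch of such a lookup, with purely illustrative directory names (Firefox does not actually use these paths):

import os

# Illustrative search order: OS packages first, then local admin
# installs, then per-user add-ons; the first match wins.
ADDON_PATHS = [
    "/usr/lib/browser-addons",                 # root-owned, OS packages
    "/usr/local/lib/browser-addons",           # local admin, third party
    os.path.expanduser("~/.browser/addons"),   # per-user
]

def find_addon(name):
    for base in ADDON_PATHS:
        candidate = os.path.join(base, name)
        if os.path.isdir(candidate):
            return candidate
    return None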

And then there is also the general challenge of keeping the signing infrastructure and the process itself secure, as illustrated by this essay at SANS.

Generally, I think that Mozilla should not try to become first and foremost a pressure group with some software trailing behind - although I do agree with some of the purported ideas - but should instead focus on creating a usable browser which actually delivers on all the privacy, security and other usability claims that they make.

Upgrading Gitea Is Painful

I just wanted to upgrade a Gitea instance and, in the process, deleted the old gitea binaries. After that, pushing to a repository no longer worked, because the path to the gitea binary - recorded e.g. when the instance or a repository is created - is hard-coded into several files, and the binary at that path was now gone. So, in order to upgrade to a newer version of gitea, you have to do the following, assuming you run the service under the gitea user:

  1. In ~gitea/.ssh/authorized_keys, you have to adjust the path of gitea to the location of the new binary.
  2. Per repo, you need to adjust the path to the gitea binary in the following files:

    • hooks/post-receive.d/gitea
    • hooks/pre-receive.d/gitea
    • hooks/update.d/gitea

Maybe more files need to be changed, but at the moment, this seems to be enough to make things generally work again.
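
For illustration, here is a small Python sketch that rewrites the hard-coded path in one go; the old and new binary locations and the repository path are just examples, adjust them to your installation:

import fileinput
import sys

OLD = "/usr/local/bin/gitea-1.7"   # example: old binary location
NEW = "/usr/local/bin/gitea"       # example: new binary location

REPO = "/home/gitea/repositories/example/project.git"
FILES = [
    "/home/gitea/.ssh/authorized_keys",
    REPO + "/hooks/post-receive.d/gitea",
    REPO + "/hooks/pre-receive.d/gitea",
    REPO + "/hooks/update.d/gitea",
]

# inplace=True redirects stdout into the file currently being read,
# so each written line replaces the original one.
for line in fileinput.input(FILES, inplace=True):
    sys.stdout.write(line.replace(OLD, NEW))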

Links:

  • https://gitea.io

Why Do DigitalOcean, AWS & Co Not Default To Debian?

I just read Chris Lamb's platform[1] for this year's DPL elections, where he asks why enterprises do not use Debian by default. In this article, I want to give some answers, although I think Chris, given his track record, is very likely already aware of them.

  • Marketing

    I think Chris is partially right: marketing is important, whether we like it or not. The ArchWiki example he mentions shows that they manage to present relevant content in a very accessible manner. This is partly due to how they organize the information, and partly due to them possibly keeping the information more up to date than we do (I frequently find better information in the ArchWiki myself).

    Their styling is imho on par with ours, so the difference must lie elsewhere. It may partly be that they use different software with a much bigger user base than ours, which certainly contributes to users finding it easier to work with: they don't need to learn anything new - no new procedures, no new markup language - and the software already feels familiar. In short, it conforms better to existing user habits because of the market share of that other software.

  • Commercial Viability

    In my professional experience, there are a few factors which make other Linux distributions, particularly CentOS and friends, more attractive to enterprises like e.g. AWS:

    • Our support cycle is too short.

      These enterprises like having ten years of support and never worrying about upgrades, because after ten years you can usually safely throw the machine away. The effect is that the vendor, e.g. Amazon, does not need to involve the customer in upgrading their application - something the customer usually does not want to do, and does not allocate any budget for, either. The typical customer expects that once his application is deployed, it will continue to run unchanged until he decides to stop running that version of the software, and he considers upgrades a waste of time and money. Also, both security updates and newer versions of some third-party software become available on older versions of such Linux systems without the need for a big upgrade. The former lets the vendor claim that his platform is secure and that any breaches are solely the customer's fault, while the latter lets the vendor offer new features to the customer without requiring an upgrade. As an example, I'd like to point to the availability of PHP7 on CentOS 6.8, which is from 2016 but does not deviate much from even older versions of CentOS - their first 6.x release came out in 2011, alongside Squeeze - and thus requires very little re-learning.

      [2018-01] It looks like Snaps are addressing this problem.

    • As a corollary, there is a much clearer separation between the very small core distribution and the large amount of third-party commercial software.

      Also, the fact that we already include tons of software, which eats a lot of manpower, is an underemphasized asset, so it may not be obvious how Debian can make users' lives easier.

    • There is a certification system in place that gives an enterprise some confidence in the abilities of prospective hires. I am not aware of any comparable certification system for Debian.

    • The non-commercial nature of Debian is both its boon and its bane. There is no single commercial entity behind Debian, which leaves enterprises not knowing whom to sue, or how long the project will survive. Never mind that similar problems have occurred with many vendors in the past - at least there is a vendor that could be sued, if need be. And such vendors appear to have enough government backing not to go bankrupt easily, either. But the distrust of volunteer organisations as loosely knit as Debian runs deep.

Links:

  • https://www.debian.org/vote/2017/platforms/lamby

Free Software and the Military

I often and gladly read Fefe's blog, because it provides a wealth of news in aggregated form, with links to the sources, without making you endure tons of advertising and worse. But one thing has bothered me for a long time: at every opportunity, Fefe demands that the GPL be extended by a clause excluding military applications.

I think nothing of this idea, and in my opinion it must be firmly opposed.

My reasoning:

For one thing, it would further fragment the software landscape in licensing terms, and in a way that would throw us back to the time before the development of the GPL. If the GPL were extended by this demand, the next developer would come along wanting to restrict its use in genetic engineering, by the church, by car drivers, vegans, people of colour, or whatever, and hardly any software would be compatible with any other software anymore. This kind of licensing chaos was the norm before the GPL.

We already have enough trouble with OpenSSL, jQuery and certainly a number of other software packages that raise licensing questions or require special treatment.

And of course there are massive demarcation problems: Does a milling machine in a munitions factory now require software that is not under Fefe's GPL, or would such a use still be covered by a "GPL" amended along these lines? What about sewing machines for protective vests? For uniforms? What if the Bundeswehr wants to use the software for civil-defence purposes during the next flood disaster, or if resistance fighters in North Korea (do they even exist?) want to use it against their government? What if those resistance fighters happen to be fighting a comparatively more liberal regime, as currently in the Middle East, or earlier in Latin America? I use the word "resistance fighters" in both cases to take the political judgement out of the question and to focus the discussion on the legal mechanics, as they appear to me as a legal layman.

On top of that, this general change is unnecessary, because already today anyone can license their software along the lines of "GPL plus the following restrictions/extensions". Popular examples are the "OpenSSL exception" and the "Classpath exception" (see Wikipedia on the subject).

Furthermore, he assumes that the military would have to abide by such a license. All experience with state behaviour argues against this, especially whenever the topic of "national security" is somehow touched. In my opinion, one has to assume that anything these people consider practical enough will, in case of doubt, simply be requisitioned, and that no judge will stand in their way.

And last but not least, one should not lose sight of the aspect of self-defence, because not only Fefe can define "military applications" - the state can do so too, as we have already seen in the dispute over encryption, and especially over PGP/GnuPG. A Fefe license of this kind would therefore have to contain clauses putting a stop to such attempts.

From my point of view, it is clear that the state, and companies in its orbit, can effectively issue themselves the necessary permission, while non-state actors will probably be unable to find a legal substitute, for example in the form of QNX. One should keep in mind that this constellation - citizens believing they can only defend themselves against authoritarian governments or other attackers by force of arms - has existed for a long time and in many parts of the world, currently most visibly in the Middle East.

On top of that, one would probably only be able to sue license violators in rare, extreme cases, assuming one even became aware of the violation, because in case of doubt these persons or circles simply have more legal and physical firepower than the software author in question.

In my opinion, Fefe should approach this topic, as others, with more reason and less gut feeling. He would then either have to drop his demand, or at least explain why only military applications should be excluded - after all, other applications kill people just as well, only not necessarily as obviously and spectacularly. And, being a political person, he would in my opinion also have to explain why these changes to the licensing landscape would be good for society.

BT hijacks DNS queries

I just configured a new DNS name in one of my domains, one which did not exist before. The associated IP address is routed to Germany. While the name was not yet live, the answer to a query should have been NXDOMAIN, meaning that the name does not exist. Example:

$ dig blablablablabla.oeko.net

; <<>> DiG 9.9.5-8-Debian <<>> blablablablabla.oeko.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 38513
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;blablablablabla.oeko.net.      IN      A

;; AUTHORITY SECTION:
oeko.net.               139     IN      SOA     a.ns.oeko.net. hostmaster.oeko.net. 1021018254 16384 2048 1048576 2560

;; Query time: 10 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Thu Feb 12 21:33:53 CET 2015
;; MSG SIZE  rcvd: 105

But instead, they gave a fake answer:

$ dig bla.oeko.net

; <<>> DiG 9.9.5-8-Debian <<>> bla.oeko.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9013
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;bla.oeko.net.          IN  A

;; ANSWER SECTION:
bla.oeko.net.       20  IN  A   92.242.132.15

;; Query time: 32 msec
;; SERVER: 192.168.1.254#53(192.168.1.254)
;; WHEN: Thu Feb 12 19:55:14 GMT 2015
;; MSG SIZE  rcvd: 46
$

As a result, I am unable to check whether my DNS setup behaves correctly until they decide to throw the fake answer away.
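
One way to check a resolver from the inside is to ask it for a random name that cannot exist under a domain you control (assuming no wildcard records there); an honest resolver must answer NXDOMAIN. A small probe using the third-party dnspython library (2.x):

import random
import string

import dns.resolver  # third-party package "dnspython"

def resolver_forges_answers(domain):
    # A random 16-character label under a wildcard-free domain
    # cannot exist, so the only honest answer is NXDOMAIN.
    label = "".join(random.choice(string.ascii_lowercase)
                    for _ in range(16))
    try:
        dns.resolver.resolve("%s.%s" % (label, domain), "A")
    except dns.resolver.NXDOMAIN:
        return False  # honest answer
    return True       # an address for a name that cannot exist

print(resolver_forges_answers("oeko.net"))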

Of course, this has huge potential for censorship of all kinds, which I have already seen in action elsewhere. I am not the only person aggravated by this kind of behaviour; please follow the link below to read other people's takes on this problem.

Thank you!

Links:

  • http://linuxforums.org.uk/index.php?topic=11464.0

Typing Chinese on a Computer

Just today, I read an article about the influence of the computer on the Chinese language. I agree with some of the author's points, but I think the difficulty of using a method like Wubi is generally overstated. CangJie is more difficult, but in contrast to the spoken language, both have the very valuable property of not changing with dialect, region or time. The speedups a user of predictive input gains are also available to users of handwriting- or structure-based input methods, and the raw input speed is excellent: about 150 characters per minute are achievable in Wubi, or 200 in CangJie. In addition to predictive input and far less of the guesswork that makes phonetic input methods slow, the structure-based input methods sport phrase books and rules that provide shortcuts for typing several characters in one go. And while I have seen every undergraduate student using only PinYin or ZhuYin, every PhD student I have met so far has switched to Wubi, simply for the massive speed increase.

However, I am unconvinced by the notion that writing Chinese is slower than English:

If you can type 150 Chinese characters per minute, that amounts to roughly 50 words per minute if you subtract particles and compounds, since many Chinese words have only one or two characters. Now imagine how fast you would have to type to achieve a similar speed in English: if the average English word has four characters, which is probably an underestimate, you would have to type at 600 characters per minute to achieve similar results - and then there is spacing, too, which does not exist in Chinese. I also hold that the structure-based input methods at least help you memorize the graphic elements of the characters, thus being closer to handwriting than phonetic input methods. With the composition rules and phrase books, you usually end up needing one to three keystrokes to produce a Chinese character. In summary, I think it is not easy to say whether English or Chinese can be typed faster.

Unfortunately, my own experience with Chinese input is limited to PinYin and Wubi. As far as the steep learning curve goes, the principles of Wubi can be explained in an hour or three, and after that, it takes about two weeks of practice to achieve some fluency. Not a big investment compared to learning Chinese in the first place, or to the time wasted over the years on an inferior method. I guess it is mostly a psychological barrier, possibly combined with unsophisticated didactics, that contributes to the perception that these methods are hard.

Small Timezone Code Snippet

Today, I was looking at how to adjust a timestamp from a log file that carries no timezone information, so that it contains the local timezone and I can stuff a timezone-aware value into a database. It turns out that this is a somewhat under-polished corner of the Python standard library, at least as of Python 2.6, which I am using (don't ask why). While looking for a solution, I frequently came across code that used pytz, but I wanted something that stays within the standard library.

So here's my hodgepodge solution to the problem, which should work in most of Europe:

import time

def getTimeOffset():
    # time.timezone and time.altzone hold seconds *west* of UTC, so
    # negate them to get the conventional "+HHMM" sense (CET -> +0100).
    if time.localtime().tm_isdst:
        offset = -time.altzone   # DST offset, e.g. CEST
    else:
        offset = -time.timezone  # standard offset, e.g. CET
    sign = "+" if offset >= 0 else "-"
    # whole minutes, so half-hour timezones come out right as well
    hours, minutes = divmod(abs(offset) // 60, 60)
    return "%s%02d%02d" % (sign, hours, minutes)
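
For illustration, a hypothetical use that appends the offset to a naive log timestamp (the timestamp string is just an example):

stamp = "2015-02-12 21:33:53"
# yields e.g. "2015-02-12 21:33:53+0100" on a CET system
print(stamp + getTimeOffset())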

This approach is a straightforward extension of the idea presented here.

New Blog Software, Links Changed

As you might have noticed, I have switched from MovableType to Pelican. As a consequence, the links in my blog have changed - usually only a little, but in a slightly irregular fashion. Please peruse the archives and search for the title of the article you are looking for. The content itself should all still be there.

Thank you!

DNS: Open Resolvers, Revisited

The list of ISPs and carriers that force broken DNS servers on their customers, thereby manipulating their customers' traffic or outright censoring what their customers can see, has long been growing. To combat such manipulations, and also to make it harder to observe customers' behaviour, it has been a pet project for some - myself included, at one time - to run an open resolver that allows random people on the Internet to query your DNS server for arbitrary names. Unfortunately, the bad guys developed an attack [0] that makes it impractical to run an open resolver. So, while politically desirable, running an open resolver is unfeasible, and network operators around the globe strive to shut them down.

Now, these attacks all rely on the simple fact that, with UDP, you have no assurance that the source address in a packet actually belongs to the sending host. In my opinion, if you are willing to make the effort, there is one obvious way to provide an open resolver without this flaw: for hosts not on your own network, provide DNS over TCP only.
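
For illustration, here is what a DNS query over TCP looks like with the third-party dnspython library; the server address is an example. Unlike UDP, the TCP three-way handshake forces the client to prove it actually owns its source address, which is what defeats reflection:

import dns.message
import dns.query  # both modules from the third-party package "dnspython"

# Build an ordinary A query and send it over TCP instead of UDP.
query = dns.message.make_query("www.oeko.net", "A")
response = dns.query.tcp(query, "192.0.2.53", timeout=5)  # example server
print(response.answer)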

I hope that someone will hack this feature into unbound [1], so people can easily deploy open resolvers in a reasonably safe way without disrupting the Internet. Currently, unbound's do-udp setting is a combined switch for incoming and outgoing queries, so turning it off also puts excessive load on upstream name servers.
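
For reference, this is roughly what the relevant unbound.conf settings look like today; the comment marks exactly the limitation described above:

server:
    do-tcp: yes
    # also turns off *outgoing* UDP, hammering upstream name servers
    # over TCP - hence the wish for a separate incoming-only switch
    do-udp: no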

Thank you for reading!

[0] See e.g. http://openresolverproject.org/
[1] https://www.unbound.net