Notebook (Posts about software)/categories/cat_software.atom2019-05-05T21:20:57ZToni MüllerNikolaMozilla's Certificate Gaffe/posts/mozillas-certificate-gaffe/2019-05-05T19:57:36+02:002019-05-05T19:57:36+02:00Toni Müller<div><p>The following rant is based purely on experience and speculation; it is
not informed by a code review or by other insights such as design
papers.</p>
<p>A few days ago, I, like many other Firefox users, noticed that all
my add-ons had been disabled. A temporary workaround was already
circulating on the Internet, but the whole episode made me wonder why
Mozilla keeps periodically re-checking the signatures of already installed software.</p>
<p>From my point of view, there is some merit to the idea that the software
a user installs should have a traceable origin. This is customarily
achieved by code signing. The Debian project does this, and, I think,
CentOS and others do so as well by now. In the case of Debian, you
download a keyring of "known-good" GPG keys, and each package is
signed with one of the keys contained in that keyring. Third-party
vendors such as Google, Docker or Jenkins use their own
keys, of course. But once you have installed the software, no signature
or key is ever checked again, and the software continues to run
indefinitely, as far as code signatures are concerned. There is no potential
for breakage due to operational conditions here.</p>
<p>This approach has the nice properties that it works offline (the basic
set of keys is part of the installation media), that you can verify the
keys at any time you want to install or re-install a package, and that
there is no data leakage to the outside world, since all operations are
performed locally.</p>
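<p>As a toy illustration of this check-once model (not Debian's actual code - the package names, the keyring contents and the keyed-hash stand-in for GPG signatures are all invented for the example), the logic looks roughly like this:</p>

```python
import hashlib
import hmac

# Hypothetical stand-in for a local keyring: key id -> key material.
# Real package signing uses asymmetric GPG keys; a keyed hash merely
# keeps this sketch self-contained.
KEYRING = {"archive-key-2019": b"known-good-key-material"}

installed = {}  # package name -> payload, written exactly once at install time

def sign(payload: bytes, key_id: str) -> str:
    return hmac.new(KEYRING[key_id], payload, hashlib.sha256).hexdigest()

def install(name: str, payload: bytes, key_id: str, signature: str) -> bool:
    """Verify against the local keyring once; reject unknown keys or bad signatures."""
    if key_id not in KEYRING:
        return False
    if not hmac.compare_digest(sign(payload, key_id), signature):
        return False
    installed[name] = payload
    return True

def run(name: str) -> bytes:
    # No signature is re-checked here: once installed, the package keeps
    # working regardless of key expiry or network state.
    return installed[name]
```

<p>Everything happens locally: the keys ship with the installation media, and nothing is ever reported to the outside world.</p>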
<p>Now look at Firefox:</p>
<p>It's not quite clear to me when exactly the signature checking occurs,
but let's explore these two plausible cases:</p>
<blockquote>
<ol class="arabic simple">
<li>signature checking at install time</li>
<li>periodic signature checking</li>
</ol>
</blockquote>
<div class="section" id="signature-checking-at-install-time">
<h2>Signature Checking at Install Time</h2>
<p>If the signature of an add-on is only checked at install time,
then the only explanation I can see for all add-ons now being
disabled is that someone released new versions of every add-on I have
installed, all at the same time - and the same for every other user on the
Internet. That is extremely unlikely. It is much more likely that Firefox
checks the signatures of all installed add-ons whenever it is
trying to install a new (version of an) add-on. Since Firefox ships with
auto-updates turned on for all add-ons, this would then occur the
next time an update was due to be installed.</p>
<p>But the proper way to handle the problem is not to disable every add-on
whose signature cannot be verified - at the very least, there should be an
easily accessible option to re-enable an add-on after it has been disabled.
Unfortunately, by default, Firefox is quite terse in this area. A much
saner way to handle the situation would be to not install the new
add-on, but to keep the old version running, unless overridden by the user.</p>
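<p>That policy can be sketched in a few lines. This is a hypothetical model, not Firefox's code; an add-on is just a dict with a fake <code>signature_ok</code> flag standing in for real cryptographic verification:</p>

```python
def verify(addon_version: dict) -> bool:
    # Stand-in for real signature verification of the downloaded add-on.
    return addon_version.get("signature_ok", False)

def apply_update(installed: dict, name: str, new_version: dict) -> dict:
    """Activate the new version only if it verifies; otherwise keep the old one."""
    if verify(new_version):
        installed[name] = new_version   # normal case: upgrade
    # On a bad signature, the old, previously vetted version simply stays
    # active; a real browser would also surface a warning and a user override.
    return installed
```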
</div>
<div class="section" id="periodic-signature-checking">
<h2>Periodic Signature Checking</h2>
<p>Another possibility is that Firefox periodically checks the
signatures of all installed add-ons. That would be a privacy invasion,
as it would leak usage data to Mozilla - something I am not confident all
Firefox users would have consented to, had they been asked.</p>
<p>But reading the Mozilla.org website and their Twitter account, where they said
they would be posting updates, suggests that this kind of "nanny-state"
behaviour would be perfectly in line with their general conduct, which
appears more and more to replace proper software engineering with
propaganda and coercion, frequently about things which are unrelated to,
or contribute little to, the practical freedom, security and privacy of
the actual users of their software. I also don't quite see how much of a
problem it would be to issue a new certificate and have the
comparatively small number of add-ons available for the current version(s) of
Firefox automatically re-signed and upgraded as well. Instead, they are
trying to cram their "studies" down the throats of Firefox users, which
would essentially surrender my browser to them for total remote control.
This isn't going to fly with me, and I think I'm not alone in this
regard. And so here we are, on day two or three after the event.</p>
</div>
<div class="section" id="summary-and-recommendations">
<h2>Summary and Recommendations</h2>
<p>I don't know what they are really doing here regarding the signature
checking, but disabling add-ons after they have been initially vetted
raises a number of serious issues. As outlined above, there is the
operational issue for the users. Then there is the potential breach of
privacy, depending on how exactly the signature checking is done.</p>
<p>I sincerely hope that they do not use a single certificate chain to sign
both the add-ons and the browser. One obvious solution would be to bake a
new certificate chain into the browser, sign all the add-ons with it,
and then push the update out, so users could install the new certificate
chain in a secure manner.</p>
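<p>That rotation can be modelled as follows - a toy sketch with invented names, using a keyed hash as a stand-in for real certificate chains; the point is only that a browser update is the secure channel for shipping new trust roots:</p>

```python
import hashlib
import hmac

def sign(key: bytes, blob: bytes) -> str:
    # Stand-in for signing under a given certificate chain.
    return hmac.new(key, blob, hashlib.sha256).hexdigest()

class Browser:
    def __init__(self, roots):
        # root name -> key material, baked into the browser binary
        self.roots = dict(roots)

    def addon_valid(self, blob: bytes, root_name: str, signature: str) -> bool:
        key = self.roots.get(root_name)
        return key is not None and hmac.compare_digest(sign(key, blob), signature)

    def apply_update(self, new_roots):
        # A browser update distributes the new chain in a secure manner;
        # add-ons re-signed under it then verify again.
        self.roots.update(new_roots)
```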
<p>If the concern is that Firefox as a whole is too insecure to trust the
add-ons in the profile to remain unchanged once installed, then maybe
having operating system packages for these add-ons - installed as an
administrative user into a read-only area of the system, eg. somewhere
under <tt class="docutils literal">/usr</tt>, where Firefox is unable to modify
them itself - is not such a bad idea after all. There should also
be a way for the user to install third-party add-ons without the
interference of Mozilla, and without disabling the signature checking
altogether. One way to do that would be to have additional search paths
for third-party or user-created add-ons, which could likewise sit
outside the reach of Firefox itself, eg. somewhere under
<tt class="docutils literal">/usr/local</tt>. Although these examples are for Linux or Unix, I am
confident that recent versions of Windows also have ways to make it
difficult for normal users to overwrite parts of the operating system.</p>
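<p>The search-path idea can be sketched as follows. The directory names are illustrative, not anything Firefox actually implements; the point is that earlier (root-owned, read-only) entries shadow later, browser-writable ones:</p>

```python
import os

# Hypothetical search path, highest priority first.
SEARCH_PATHS = [
    "/usr/lib/firefox/addons",                # distribution packages, read-only
    "/usr/local/lib/firefox/addons",          # third-party / user-created add-ons
    os.path.expanduser("~/.mozilla/addons"),  # per-user, browser-writable
]

def discover_addons(paths=SEARCH_PATHS):
    """Map add-on id -> path; the first match along the search path wins."""
    found = {}
    for directory in paths:
        if not os.path.isdir(directory):
            continue
        for entry in sorted(os.listdir(directory)):
            found.setdefault(entry, os.path.join(directory, entry))
    return found
```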
<p>And then there is also the general challenge of keeping the signing
infrastructure and the process itself secure, as <a class="reference external" href="https://www.sans.org/reading-room/whitepapers/critical/scary-terrible-code-signing-problem-you-36382">illustrated by this
essay at SANS</a>.</p>
<p>Generally, I think that Mozilla should not try to become first and
foremost a pressure group with some software trailing behind - although I
do agree with some of the purported ideas - but should instead focus on
creating a usable browser which actually delivers on all the
privacy, security and other usability claims that they make.</p>
<p>Interesting link:</p>
<blockquote>
<ul class="simple">
<li><a class="reference external" href="https://blog.mozilla.org/addons/2019/05/04/update-regarding-add-ons-in-firefox/">https://blog.mozilla.org/addons/2019/05/04/update-regarding-add-ons-in-firefox/</a>
(also at <a class="reference external" href="http://archive.is/bMeoF">http://archive.is/bMeoF</a> )</li>
</ul>
</blockquote>
</div></div>Upgrading Gitea Is Painful/posts/2018-01-15-upgrading-gitea-is-painful/2018-01-15T00:00:00+01:002018-01-15T00:00:00+01:00Toni Mueller<div><p>I just wanted to upgrade a Gitea instance and, in the process, deleted
the old gitea binaries. After that, pushing to a repository no longer
worked, because the path to the gitea binary - which is recorded, eg.
when the instance or a repository is created - is hard-coded
into several files, and that binary was just gone. So, in order to
upgrade to a newer version of gitea, you have to do the following,
assuming you run the service under the <code>gitea</code> user:</p>
<ol>
<li>In <code>~gitea/.ssh/authorized_keys</code>, you have to adjust the path of
gitea to the location of the new binary.</li>
<li>
<p>Per repo, you need to adjust the path to the <code>gitea</code> binary in the
following files:</p>
<ul>
<li>hooks/post-receive.d/gitea</li>
<li>hooks/pre-receive.d/gitea</li>
<li>hooks/update.d/gitea</li>
</ul>
</li>
</ol>
<p>Maybe more files need to be changed, but at the moment, this seems to
be enough to make things generally work again.</p>
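<p>The two steps above lend themselves to automation. Here is a hypothetical helper along those lines - the file layout follows the description above, but try it on a copy of your repositories first:</p>

```python
import os

# Per-repository hook files that embed the gitea binary path.
HOOK_FILES = (
    "hooks/post-receive.d/gitea",
    "hooks/pre-receive.d/gitea",
    "hooks/update.d/gitea",
)

def rewrite_path(filename: str, old: str, new: str) -> bool:
    """Replace the old binary path in one file; return True if it changed."""
    with open(filename) as fh:
        text = fh.read()
    if old not in text:
        return False
    with open(filename, "w") as fh:
        fh.write(text.replace(old, new))
    return True

def fix_repo(repo_dir: str, old: str, new: str):
    """Adjust the gitea path in all hook files of one repository."""
    for rel in HOOK_FILES:
        path = os.path.join(repo_dir, rel)
        if os.path.exists(path):
            rewrite_path(path, old, new)
```

<p>The same <code>rewrite_path</code> call also works for <code>~gitea/.ssh/authorized_keys</code>.</p>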
<p>Links:</p>
<ul>
<li><a href="https://gitea.io">https://gitea.io</a></li>
</ul></div>Subscribing to Google Groups by Email/posts/subscribing_to_google_groups_by_email/2013-11-19T16:52:00+01:002013-11-19T16:52:00+01:00Toni Mueller<div><p>Maybe this has been documented somewhere, but at least I cannot find it on Google's own support pages. So here goes:</p>
<p>Google Groups have become a major mailing list hub. Unfortunately, Google tries to coerce people into using their web interface, which is something I object to for a number of reasons (it's ok to have it as a fall-back, but not as a primary interface). Now, Google Groups can be subscribed to using standard email, and here is how. As an example, I'll use the <a href="http://gitlab.org">GitLab</a> mailing list.</p>
<ol>
<li>
<p>Figure out the list address:</p>
<ol>
<li>The group is at (https://groups.google.com/forum/#!topic/gitlabhq/). In that group, select a random posting, then use the button on the right of the "Sign in to reply" button to open a drop-down menu. Select "Show original".</li>
<li>
<p>Locate the "To: " line (maybe another line - see below). For this group, it should read</p>
<p>To: gitlabhq@googlegroups.com</p>
</li>
</ol>
</li>
<li>
<p>Subscribe to this group:</p>
<ol>
<li>
<p>Use your email program to send an email to this address:</p>
<p>gitlabhq+subscribe@googlegroups.com</p>
</li>
<li>
<p>You will get a confirmation email with a token in the subject line. Reply to it.</p>
</li>
<li>
<p>You "should" get a welcome message, and be done.</p>
</li>
</ol>
</li>
</ol>
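<p>The address scheme used above is a general Google Groups convention: append <code>+subscribe</code> (or <code>+unsubscribe</code>) to the group name. A trivial helper:</p>

```python
def group_addresses(group: str) -> dict:
    """Derive the standard Google Groups email addresses for a group name."""
    domain = "googlegroups.com"
    return {
        "post":        f"{group}@{domain}",
        "subscribe":   f"{group}+subscribe@{domain}",
        "unsubscribe": f"{group}+unsubscribe@{domain}",
    }
```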
<p>I write "should", as sometimes Google simply drops the email, and there is no way to figure out why. Therefore, I tend to use email addresses which I have already used in conjunction with Google - they work best.</p>
<p>HTH, and hopefully, Google will get their act together, one day.</p></div>(Hidden) Tracking At All Costs?/posts/hidden_tracking_at_all_costs/2013-09-09T09:33:00+02:002013-09-09T09:33:00+02:00Toni Mueller<div><p>Today, I was once more aggravated when viewing something on <code>github.com</code>, as the avatar icons for the individual users were not being displayed. It turns out that <code>github</code> has reworked their system so that requests for such avatar icons now go to <code>gravatar.com</code>, a popular service for such purposes. The following short essay applies not only to github, which merely serves as an example, but to other web services as well, and offers ideas for an alternative design without these problems.</p>
<p>This is, in itself, a bad move, since it turns gravatar into a massive tracking database, much like the ones at <code>doubleclick.net</code> or other advertising agencies, only with an emphasis on techie websites. That this move, and the support for this kind of tracking, was intentional is also underlined by the fact that the actual icons are <strong>not</strong> delivered by gravatar, but by github, by redirecting to the following URL:</p>
<pre class="code literal-block"><span></span>https://identicons.github.com
</pre>
<p>So in effect, github delivers all the icons itself, but makes a "detour" via gravatar to give them the ability to collect tracking data.</p>
<p>Apart from the profile-building property of this arrangement, this idea does also look quite dubious from a usability perspective:</p>
<ul>
<li>It involves one more service, thus reducing the availability of the overall service.</li>
<li>It results in at least two more web requests per icon - and HTTPS requests at that - introducing a noticeable delay from the user's perspective, plus additional data transfer.</li>
<li>By the same token, it adds more CPU load both on the server side and on the user side.</li>
</ul>
<p>Using Firebug, I determined that the added delay for my notification page varies roughly between 1s for the fastest and 2.5s for the last few requests, in overall page loading time. I dimly remember that conventional wisdom demands response times well under 1 second for a page to have acceptable performance. Firebug also showed that the individual icon requests were usually processed in 1 second or less.</p>
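<p>One way to avoid the detour entirely is to compute the avatar hash server-side and serve the icon from the site's own host. Gravatar's published scheme hashes the trimmed, lowercased email address with MD5; the local URL layout below is invented for the example:</p>

```python
import hashlib

def avatar_hash(email: str) -> str:
    """MD5 of the trimmed, lowercased address, per Gravatar's published scheme."""
    return hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()

def local_avatar_url(email: str) -> str:
    # Served from the site's own domain: one request, no third-party tracking.
    return f"/avatars/{avatar_hash(email)}.png"
```

<p>The site can still fetch or generate the icon once, out of band, and cache it; the user's browser never talks to a third party.</p>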
<p>Now the questions are: Why would you introduce such delays into your website, especially if it involves added cost for everyone, without user-visible benefits? What are the hidden benefits of such a measure?</p></div>Fun With Captchas/posts/fun-with-captchas/2013-06-06T23:25:00+02:002013-06-06T23:25:00+02:00Toni Mueller<div><p>I generally understand why they are there. But not always... see for
yourself:</p>
<p><img alt="Fun with Captchas" src="/images/stackoverflow-captcha-fun-thumb-1449x930-19-thumb-600x385-21.png"></p>
<p>Can you solve that?!?</p></div>Responsive Images: Why Do We Need Another Tag?/posts/responsive_images_why_do_we_need_another_tag/2012-11-22T02:34:00+01:002012-11-22T02:34:00+01:00Toni Mueller<div><p>I recently started to read about
<a href="http://en.wikipedia.org/wiki/Responsive_web_design">responsive web design</a>,
and quickly hit the image problem. This can be summarized as "How to
deliver the correct images for a given device?"</p>
<p>There are a lot of approaches using JavaScript user agent sniffing to
tackle this problem, but none of them appears satisfactory. I
also came across this proposal of the
<a href="http://www.w3.org/community/respimg/">W3C Responsive Images Community Group</a>.
Given past problems with browser implementations, I am not at all
enthusiastic about a new tag with complicated semantics that browsers'
code bases would have to support.</p>
<p>I suggest the following approach, although they say that attempts at changing
the processing of <code><img /></code> tags were shot down by vendors (why? please
educate me):</p>
<p>Use a new meta tag for images or other media, in the same way as
"base", only with different values for different types. E.g.:</p>
<pre class="code literal-block"><span></span> <meta type="base-img" condition="CSS3 media selector statement" url="some url" />
<meta type="base-video" condition="CSS3 media selector statement" url="some url" />
</pre>
<p>That should mean: load images from a different base URL than
ordinary content <strong>if their src is relative</strong>, and videos from
yet another URL. Expand this scheme to cover all interesting elements
(<code><object /></code>, <code><video /></code>, ...).</p>
<p>In effect, this should construct a tree for accessing resources,
which the browser can evaluate as it wants to.</p>
<p>The idea is that web site owners can pre-compute all desirable images
and other assets, place them wherever they find convenient, and that
browsers don't have to do much to actually request them. Off the top
of my head, I expect these additional benefits over the suggested
<code><picture /></code> tag:</p>
<ul>
<li>Since image (or object) URLs are stable, caching is not a problem.</li>
<li>Since image URLs are stable, images can be statically
pre-computed, and statically served from low-power servers. </li>
<li>By using CSS media selector statements, required computing power
on the client should be small, improving battery life. </li>
<li>Since the method uses CSS3 media selectors, user agent sniffing is
not required (= requires little power on the server). </li>
<li>Except for maintaining the tree of base URLs and using it to
calculate image paths, no changes to the process of digesting the
"HTML soup" are required. </li>
<li>Bandwidth to convey the required information is reduced to a few
<code><meta /></code> tags per page, not the rather verbose <code><picture /></code>
tag.</li>
<li>The method is extensible to other content types.</li>
<li>The method requires no re-learning for web designers.</li>
<li>The method is compatible with existing design tools, as it does
not change the user-facing part of the <code><img /></code> tag in any way.</li>
</ul>
<p>In case a <code><meta /></code> statement for a given tag is not present,
the browser would fall back to the next best resource type. I suggest
this hierarchy:</p>
<ol>
<li>Not specified: Use the default method of calculating the base URL.</li>
<li>Standard <code><meta /></code> tag: Same - keep the existing behaviour.</li>
<li>Specify <code>base-img</code> as the value of the <code>type</code> attribute of
<code><meta /></code>: Use this for all heavy objects.</li>
<li>Specify <code>base-video</code> as the value: Use this for videos, but
continue to use other methods to find images.</li>
<li>Conduct a topological search for base URLs, eg using this
ordering: <code>(nil) (base) (img) (object) (video) ...</code>.</li>
</ol>
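<p>To make the fallback concrete, here is a toy resolver for the proposed scheme (a sketch of the proposal, not an implemented standard): given the base entries whose media condition matched, it walks down the hierarchy until it finds a usable base URL:</p>

```python
from urllib.parse import urljoin

# Fallback chain: a missing base type falls back to the next best one.
FALLBACK = {"base-video": "base-img", "base-img": "base"}

def resolve(src: str, matched_bases: dict, kind: str, page_url: str) -> str:
    """Compute the URL for a media src under the proposed base-* meta scheme."""
    if src.startswith(("http://", "https://", "/")):
        return src                      # absolute src: bases don't apply
    while kind is not None:
        if kind in matched_bases:
            return urljoin(matched_bases[kind], src)
        kind = FALLBACK.get(kind)       # walk down the hierarchy
    return urljoin(page_url, src)       # default base-URL calculation
```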
<p>Yes, I am <strong>very</strong> late to the discussion, but would still like your
input and pointers to relevant arguments. Thank you!</p>
<p>License of this text: CC-BY-NC-NA 3.0</p></div>Truncated URLs in Firefox/posts/truncated-urls-in-firefox/2011-11-28T09:48:00+01:002011-11-28T09:48:00+01:00Toni Mueller<p>For some time, I have been annoyed by recent Firefox's behaviour of
truncating the front of URLs so that "http" or "https" is not shown. I
would rather have the full URL shown, and so I poked around
<code>about:config</code> and found <code>browser.urlbar.trimURLs</code>. Set this to false,
and the full URLs are shown in the urlbar (formerly known as location
bar).</p>The Case Against Google Chrome/posts/the_case_against_google_chrome/2011-08-25T16:16:00+02:002011-08-25T16:16:00+02:00Toni Mueller<div><p>There are two web browsers based on the Google Chrome codebase:</p>
<ul>
<li>Google Chrome (of course)</li>
<li>Chromium</li>
</ul>
<p>The latter is a free-software-only version of Google Chrome, with
the spyware features of the original ripped out; it can be installed
in Debian, eg. using apt-get.</p>
<p>Today, I wanted to try the extensions, since the browser by itself is
suitable for not much more than simply looking at a web page. But if
you want any kind of extension - like eg. <em>AdBlock</em>, or the
<em>SpeedMeter</em>, or the <em>SessionManager</em>, or whatever else would benefit
you as a user - you immediately find yourself locked out of Google's
Webstore. By the way, the name already gives away what the
problem really is: Google, like about every other vendor I am aware of,
wants to reduce <strong>you</strong> to a user, and to cut down on <strong>your abilities</strong>
to create, or to use the software in ways you deem fit, instead of only
in ways <strong>they</strong> deem fit. So there is, eg., no simple way to download an
extension to your hard disk drive, maybe for later digestion - no, you
can, at best, install the extension online, into your current
profile. And if you somehow lose that profile, you get to try again. So they
can not only track your every move, they can also manage the
availability of their extensions to you as they choose. Ad
sales going down? Poof, no more AdBlock for you.</p>
<p>This way, you sell out your freedom and your privacy to Google in the
same way you probably did before to Microsoft and Apple, and a
plethora of other companies.</p>
<p>Now my question to you is: <strong>Are you prepared to accept that, and if so, why?</strong></p></div>Trackers - a Rough Overview/posts/trackers_-_a_rough_overview/2011-05-30T20:56:00+02:002011-05-30T20:56:00+02:00Toni Mueller<div><p>I've been asked to compare various issue trackers. While I don't
really feel qualified to do so, I have an opinion nonetheless. So here
are my two cents about it:</p>
<ul>
<li>
<p>There are trackers for various use cases, various technologies, and
licenses (eg. Jira is imho mostly commercial software).</p>
</li>
<li>
<p>I've not yet found a package which is equally suitable for handling
customer (self-?) support tasks outside of software development, and
software development tasks.</p>
</li>
<li>
<p>I don't have real experience with Jira, and only a very cursory
impression about eg. OTRS (Perl) and Mantis (PHP).</p>
</li>
<li>
<p>Of all the trackers I have seen so far, OTRS, RT (Perl) and roundup
(Python) are basically suitable for customer support tasks, but less
suitable for software development tasks.</p>
</li>
<li>
<p>OTOH, Trac and Redmine seem to support software development tasks
much better (and Redmine, written with RoR, much better than Trac,
written in Python, imho).</p>
</li>
</ul>
<p>For me, so far only Roundup and RT have mattered in the customer-support
space, but I intend to take a look at OTRS, now that they claim to
support ITIL-conformant processes (whatever that means, but it is a
requirement of some potential customers). When I talk about RT, I mean
RT 3.x, not RT 4.x. I also ignore all PHP software on principle.</p>
<ul>
<li>
<p>Roundup's advantage, compared to RT, is that it is very lightweight.</p>
</li>
<li>
<p>Roundup's permission system seems to be more flexible than RT's, but
all in all, changing anything requires rolling out a new revision of
the installation (eg. to include the new permissions). This stuff is
highly intertwined with the rest of Roundup, and I have yet to see
(I didn't try) how to, eg., migrate the database from one version of the
software to the next.</p>
</li>
<li>
<p>RT's advantage is the much larger out-of-the-box functionality, and
esp. the support for distributed workflows, with auto-escalation,
re-assignment, hierarchical tickets with dependencies, statistics,
multiple external authentication sources and what-not. It is much
more heavy-weight, though, and the UI is clumsier, too. RT can be
scripted, and the scripts seem to end up in the database, making it
comparatively easy to migrate an instance. It is Perl, though, and
the main author(s) are, afaik, at the forefront of Perl development
themselves, so you frequently find that you have to pull in
brand-new versions of modules from CPAN that you have never heard of
and that have had little exposure.</p>
</li>
<li>
<p>OOTB, RT's permission system is much more powerful than what is
distributed with Roundup, though.</p>
</li>
<li>
<p>Roundup seems to be much more geared towards a "one customer
project, one tracker" situation, where, eg., general access control is
not very important.</p>
</li>
</ul>
<p>In the software development space, integrating a tracker, a wiki, and
a repository browser was popularized probably by SourceForge, and has
led to the creation of packages like Trac and Redmine, the latter
allegedly being a clone of Trac (imho it isn't, if you run the two
side-by-side).</p>
<ul>
<li>
<p>Roundup integrates with neither a wiki nor a repository browser
out of the box, so one would have to do manual work to use it in that
manner. One also has to find suitable wiki and repository-browser
software to integrate with first, and except for the wiki (MoinMoin),
there are, imho, no obvious candidates.</p>
</li>
<li>
<p>Of the remaining two, Redmine imho has much better support for
multi-project scenarios, seems to support a broader range of
databases, and also provides much more functionality.</p>
</li>
<li>
<p>It can also be extended much more easily by Joe Average User, thanks
to a plethora of plugins supporting popular use cases.</p>
</li>
<li>
<p>Redmine also appears to be easier to host than Roundup, eg. using thin.</p>
</li>
</ul>
<p><strong>Links:</strong></p>
<ul>
<li><a href="http://www.roundup-tracker.org/">Roundup</a></li>
<li><a href="http://www.bestpractical.com/">RT</a></li>
<li><a href="http://trac.edgewall.org/">Trac</a></li>
<li><a href="http://www.redmine.org/">Redmine</a></li>
<li><a href="http://www.otrs.org/">OTRS</a></li>
<li><a href="http://www.atlassian.com/software/jira/">Jira</a></li>
</ul></div>ZopeProfiler on Plone4/posts/zopeprofiler_on_plone4/2011-05-04T14:03:00+02:002011-05-04T14:03:00+02:00Toni Mueller<div><p>As per the author's statement, using
<a href="http://www.dieter.handshake.de/pyprojects/zope/#bct_sec_5.8">ZopeProfiler</a>
together with Plone4 is unsupported. It really is. First, get a
<a href="http://pypi.python.org/pypi/Products.ZopeProfiler">current version</a>
of ZopeProfiler instead. Add it to your buildout as usual and run
buildout. You also have to enable it in the relevant instance's
(eg. <code>secondary</code>) <code>zope.conf</code>:</p>
<pre class="code literal-block"><span></span>enable-product-installation on
</pre>
<p>You also need to fix the output of the <code>pstats</code> module. On Debian,
it is located at <code>/usr/lib/python2.6/pstats.py</code>. Copy it to your
virtualenv's <code>lib/python2.6</code> and manually apply the patch mentioned
here: http://bugs.python.org/issue7372</p>
<p>After that, following the instructions generally works, except that
the site now runs orders of magnitude slower, and (at least) I
get this error when trying to view the stats (sample traceback):</p>
<pre class="code literal-block"><span></span><span class="mi">2011</span><span class="o">-</span><span class="mi">05</span><span class="o">-</span><span class="mi">04</span> <span class="mi">13</span><span class="p">:</span><span class="mi">47</span><span class="p">:</span><span class="mi">56</span> <span class="k">ERROR</span> <span class="n">Zope</span><span class="p">.</span><span class="n">SiteErrorLog</span> <span class="mf">1304509676.940</span><span class="p">.</span><span class="mi">218731970327</span> <span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">localhost</span><span class="p">:</span><span class="mi">9082</span><span class="o">/</span><span class="n">Control_Panel</span><span class="o">/</span><span class="n">ZopeProfiler</span><span class="o">/</span><span class="n">showHigh</span>
<span class="n">Traceback</span> <span class="p">(</span><span class="n">innermost</span> <span class="n">last</span><span class="p">):</span>
<span class="k">Module</span> <span class="nn">ZPublisher.Publish</span><span class="p">,</span> <span class="n">line</span> <span class="mi">127</span><span class="p">,</span> <span class="ow">in</span> <span class="n">publish</span>
<span class="k">Module</span> <span class="nn">ZPublisher.mapply</span><span class="p">,</span> <span class="n">line</span> <span class="mi">77</span><span class="p">,</span> <span class="ow">in</span> <span class="n">mapply</span>
<span class="k">Module</span> <span class="nn">ZPublisher.Publish</span><span class="p">,</span> <span class="n">line</span> <span class="mi">47</span><span class="p">,</span> <span class="ow">in</span> <span class="n">call_object</span>
<span class="k">Module</span> <span class="nn">Shared.DC.Scripts.Bindings</span><span class="p">,</span> <span class="n">line</span> <span class="mi">324</span><span class="p">,</span> <span class="ow">in</span> <span class="n">__call__</span>
<span class="k">Module</span> <span class="nn">Shared.DC.Scripts.Bindings</span><span class="p">,</span> <span class="n">line</span> <span class="mi">361</span><span class="p">,</span> <span class="ow">in</span> <span class="n">_bindAndExec</span>
<span class="k">Module</span> <span class="nn">App.special_dtml</span><span class="p">,</span> <span class="n">line</span> <span class="mi">185</span><span class="p">,</span> <span class="ow">in</span> <span class="n">_exec</span>
<span class="k">Module</span> <span class="nn">DocumentTemplate.DT_Let</span><span class="p">,</span> <span class="n">line</span> <span class="mi">76</span><span class="p">,</span> <span class="ow">in</span> <span class="n">render</span>
<span class="k">Module</span> <span class="nn">DocumentTemplate.DT_Util</span><span class="p">,</span> <span class="n">line</span> <span class="mi">202</span><span class="p">,</span> <span class="ow">in</span> <span class="n">eval</span>
<span class="o">-</span> <span class="n">__traceback_info__</span><span class="p">:</span> <span class="n">stdnameRe</span>
<span class="k">Module</span> <span class="o"><</span><span class="kt">string</span><span class="o">></span><span class="p">,</span> <span class="n">line</span> <span class="mi">1</span><span class="p">,</span> <span class="ow">in</span> <span class="o"><</span><span class="n">module</span><span class="o">></span>
<span class="k">Module</span> <span class="nn">Products.ZopeProfiler.ZopeProfiler</span><span class="p">,</span> <span class="n">line</span> <span class="mi">237</span><span class="p">,</span> <span class="ow">in</span> <span class="n">getStatistics</span>
<span class="k">Module</span> <span class="nn">pstats</span><span class="p">,</span> <span class="n">line</span> <span class="mi">353</span><span class="p">,</span> <span class="ow">in</span> <span class="n">print_stats</span>
<span class="n">ValueError</span><span class="p">:</span> <span class="n">I</span><span class="o">/</span><span class="n">O</span> <span class="n">operation</span> <span class="k">on</span> <span class="n">closed</span> <span class="n">file</span>
</pre>
<p>I've seen the latter error on various other occasions as well,
esp. when a long time has passed between the original activity and the
display of the results (eg. when running <code>ExternalMethod</code>s). If someone
has a fix for <strong>that</strong>, I'd highly appreciate it!</p></div>