Somehow the password form was limiting the number of characters to 50.
Private Wiki Extension relies on passwords that are 64 characters in length,
so I have increased the limit.
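For illustration only, the fix amounts to raising the maxlength of the
password field, along these lines (the field name is a placeholder, not
the exact Oddmuse form code):

    # allow at least the 64-character passwords used by private-wiki.pl
    print $q->password_field(-name => 'pwd', -size => 20, -maxlength => 64);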
It was checking whether the number of UTF-8 characters was <= 120, but a
UTF-8 character can take up to 4 bytes. 120*4 = 480 bytes, well above 255
(the filename length limit on many file systems), so the wiki could
attempt to write files with illegal filenames. This is now fixed.
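A sketch of the safer check, assuming the page name is in $id (this shows
the idea, not the literal patch): compare the encoded byte length against
the file system limit instead of counting characters:

    use Encode qw(encode_utf8);
    # a UTF-8 character can occupy up to 4 bytes, so count bytes, not characters
    my $ok = length(encode_utf8($id)) <= 255;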
Additionally, a $PageNameLimit option was added. Now you can choose the
maximum page name length yourself (which is very useful for extensions
like mac.pl or private-wiki.pl).
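A minimal example, assuming the option is set in the wiki config file
like any other Oddmuse option (the value is illustrative):

    # allow longer page names, e.g. for the encrypted names used by private-wiki.pl
    $PageNameLimit = 120;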
These flags are essential for security. The problem we are trying to
solve is the following:
1) you visit a wiki using HTTPS and you set your password.
2) you accidentally visit the same website using plain HTTP
3) although you don't notice, your cookies are sent over the insecure
connection.
Even if that website redirects to HTTPS, even if it denies all insecure
traffic, your cookie is still leaked on that first request. That's how
cookies work.
"secure" and "httponly" flags solve this. It means that these cookies
will only be sent over a secure connection. (Strictly speaking, it is the
"secure" flag that restricts the cookie to secure connections; "httponly"
additionally keeps it out of reach of client-side scripts.) If you have
set your password using HTTPS and later visit the same wiki using plain
HTTP, it will look like you are not logged in (because these cookies are
not sent when you access the website over a non-secure connection).
If you have HTTPS on your website -- ALWAYS make sure that you set your
password using it. Alternatively redirect all non-secure requests to
HTTPS - that's even better.
If you set your password using HTTP, the same cookie will be used for
both HTTP and HTTPS requests - this is done for compatibility with
HTTP-only wikis.
$ENV{'HTTPS'} is either 'on' or the empty string. 'on' is truthy, so it
should not create any problems; we can safely use it.
I've tested this, it works as expected.
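A rough sketch of how such a cookie can be set with CGI.pm; the cookie
name and value are placeholders, not the actual Oddmuse code:

    # mark the cookie secure only when the request came in over HTTPS,
    # so that plain-HTTP wikis keep working
    my $cookie = $q->cookie(
      -name     => 'CookieName',
      -value    => $value,
      -secure   => $ENV{'HTTPS'},   # 'on' or '', i.e. truthy only for HTTPS
      -httponly => 1,               # keep the cookie away from client-side scripts
    );
    print $q->header(-cookie => $cookie);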
We had a problem in the following situation: User A starts editing a
page at t1. This timestamp is stored in the oldtime parameter. In the
meantime user B edits and saves the same page at t2. If user A saves
now, the changes are merged. But if user A previews first and saves
later, the changes would not be merged, because the preview changed
oldtime from t1 to t2. This commit makes sure that an existing oldtime
parameter is preferred over the actual page timestamp.
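A sketch of the intended behavior (illustrative, not the literal diff):
take the submitted oldtime parameter and only fall back to the page's
own timestamp if there is none:

    # the oldtime submitted with the original edit form wins over the stored timestamp
    my $oldtime = GetParam('oldtime', '') || $Page{ts};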
10 is just too low. For wikis with a CSS page it means that you can only
fetch about 5 pages in 20 seconds, because every page view also requests
the CSS page.
I've seen my users complain about that limit and I've seen it myself
too many times.
Doubling $SurgeProtectionViews should make it more sane.
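In config terms this amounts to:

    # twice the old default of 10 requests per surge protection window
    $SurgeProtectionViews = 20;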
The old code decided whether to show the replacement field as part of
DoSearch and set $ReplaceForm. By that time, GetHeader had already
called GetSearchForm, so it was too late. I'm not sure why the
variable was necessary in the first place, so I'm removing it
entirely.
grep ReplaceForm *.pl modules/*.pl t/*.t now comes up empty.
This command was used:
find . -type f -print0 | xargs -0 sed -i 's/return undef/return/g'
The idea behind this commit is described on the http://oddmuse.org/wiki/Refactoring page.
In short: 'return undef' returns (undef) in list context, i.e. a list
with one element, which is wrong.
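The difference is easy to see in list context:

    sub broken { return undef }   # wrong: yields a one-element list (undef)
    sub fixed  { return }         # right: yields an empty list

    my @a = broken();   # one element, so "if (@a)" is true, surprisingly
    my @b = fixed();    # empty list, so "if (@b)" is false, as expected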
The summary for uploaded files had nested p elements; the nesting was
removed. When no summary is provided, we now drop the "#FILE..."
stuff; in this case, no summary at all is better.
When closing the pipe to grep, check the status returned by the child
process in $? and return all pages if there was an error (which means
that grep did not filter any pages).
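Inside the filter function, the pattern looks roughly like this (list
and filehandle names are illustrative); since grep only acts as a
pre-filter for the real Perl search, falling back to the unfiltered
list is always safe:

    close($fh);               # close the pipe to grep
    return @pages if $?;      # child reported a problem: don't filter anything
    return @matches;          # otherwise return only the pages grep let through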
As explained on my blog
<https://alexschroeder.ch/wiki/2015-01-13_Handwritten_Optimization>,
the current implementation is "suddenly" very slow. This is especially
noticeable when loading large pages. Without quite understanding how
this is possible, I'm reverting to the old implementation.
A test for the action in GetRobots makes sure that the robots meta
element gets a NOINDEX if the action is not "browse". Since the
default is "browse", this means we need to set an action for
search=foo and match=foo.
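Roughly, the logic under test looks like this (a sketch, not the
literal GetRobots code):

    # only plain browsing should be indexed; every other action gets NOINDEX
    my $action = GetParam('action', 'browse');
    my $content = $action eq 'browse' ? 'INDEX,FOLLOW' : 'NOINDEX,FOLLOW';
    print $q->meta({-name => 'ROBOTS', -content => $content});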
A long time ago, $NewComment was the default text for the comment form,
i.e. the aftertext. That's why the code still had some comparisons of
aftertext with $NewComment. Now that $NewComment is a label in the
comment form and no longer the content of the text area, these tests can
be removed.
To facilitate future debugging, STDERR now also gets the UTF-8 layer.
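That is, something like:

    binmode(STDERR, ':utf8');   # debug output may contain non-ASCII text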
Apparently CGI does not decode UTF-8 encoded URL parameters. Handle this
case in GetParam.
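A sketch of the workaround (the exact check in GetParam may differ):

    use Encode qw(decode_utf8);
    my $value = $q->param($name);
    # CGI hands back raw bytes for URL parameters; decode them unless Perl
    # already considers the string decoded
    $value = decode_utf8($value) if defined $value && !utf8::is_utf8($value);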
PageHtml can be called when STDOUT already has the UTF-8 layer. It needs
to be able to handle both cases. That's why we call binmode without any
layers and then we call binmode with the UTF-8 layer again. Now it will
work for RSS files as well.
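In other words, the relevant part of PageHtml behaves roughly like this
(sketch):

    binmode(STDOUT);             # drop any existing :utf8 layer before capturing
    # ... the page is printed and captured here ...
    binmode(STDOUT, ':utf8');    # put the UTF-8 layer back for the caller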
Unrelated fix: In order to force a decent Etag header even if no index
file exists (and thus $LastUpdate is undef), we use $Now as an
alternative.
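That is, the timestamp backing the header becomes something like:

    my $ts = $LastUpdate || $Now;   # fall back to the current time if no index file exists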
Calling binmode to add the utf8 layer to STDOUT resulted in double-encoded
pages being included via PageHtml. On my homepage I was appending the
comments to every page using the following code:
    my $target = $CommentsPrefix . $id;
    my $page = '';
    $page = PageHtml($target) if $IndexHash{$target};
    print $q->div({-class=>'comment'},
                  $q->h2(T('Comments')),
                  $page);