|
We need to clear the UID-offset-to-MSN mapping when
leaving mailboxes via EXAMINE/SELECT/CLOSE.
Furthermore, uo2m_last_uid() needs to account for tiny mailboxes
where the scalar representation of {uo2m} may evaluate to
`false' in a boolean context.
|
|
We no longer pass an arrayref to search_common() or
parse_query(), so handle the CHARSET directive in
the Parse::RecDescent-generated parser directly.
|
|
For properly parsing IMAP search requests, it's easier to use a
recursive descent parser generator to deal with subqueries and
the "OR" statement.
Parse::RecDescent was chosen since it's mature, well-known,
widely available and already used by our optional dependencies:
Inline::C and Mail::IMAPClient. While it's possible to build
Xapian queries without using the Xapian string query parser,
this iteration of the IMAP parser still builds a string which is
passed to Xapian's query parser for ease of diagnostics.
Since this is a recursive descent parser dealing with untrusted
inputs, subqueries have a nesting limit of 10. I expect that is
more than adequate for real-world use.
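The depth guard can be sketched in Python (the real grammar is a
Parse::RecDescent one in Perl; names here are hypothetical):

```python
MAX_DEPTH = 10  # nesting limit for untrusted subqueries, as above

def parse_query(tokens, depth=0):
    """Parse tokens like ['(', 'a', '(', 'b', ')', ')'] into nested lists."""
    if depth > MAX_DEPTH:
        raise ValueError('subqueries nested too deeply')
    out = []
    while tokens:
        tok = tokens.pop(0)
        if tok == '(':
            out.append(parse_query(tokens, depth + 1))
        elif tok == ')':
            return out
        else:
            out.append(tok)
    return out
```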
|
|
Since we now support MSNs properly, it seems acceptable
to support regular SEARCH requests in case there are any
clients which still use non-UID SEARCH.
|
|
stop_idle was a noop when the client issued a "DONE"
continuation or just disconnected. This would not have
led to a long-term memory leak since FDs get closed and
reused, anyways, and all of our InboxIdle mappings are
keyed by FD.
|
|
Since IMAP IDLE users aren't expected to issue any commands, we
can terminate their connections immediately on graceful
shutdown.
Furthermore, we need to drop the inotify FD from the epoll set
to avoid warnings during global destruction. Embarrassingly,
this required fixing wacky test ordering from 2a717d13f10fcdc6
("nntpd+imapd: detect replaced over.sqlite3")
|
|
"DONE" is a continuation and not a normal IMAP command, so
ensure it can't be called like a normal IMAP command which
has a tag.
|
|
Since we limit our mailbox slices to 50K and can guarantee a
contiguous UID space for those mailboxes, we can store a mapping
of "UID offsets" (not full UIDs) to Message Sequence Numbers as
an array of 16-bit unsigned integers in a 100K scalar.
For UID-only FETCH responses, we can momentarily unpack the
compact 100K representation to a ~1.6M Perl array of IV/UV
elements for a slight speedup.
Furthermore, we can (ab)use hash key deduplication in Perl5 to
deduplicate this 100K scalar across all clients with the same
mailbox slice open.
Technically we can increase our slice size to 64K w/o increasing
our storage overhead, but I suspect humans are more accustomed
to slices easily divisible by 10.
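A rough Python analog of the layout, with array('H') standing in for
the pack("S*") scalar (a sketch, not the actual Perl code):

```python
from array import array

UID_SLICE = 50000  # messages per mailbox slice

# MSN for each UID offset, packed as unsigned 16-bit ints.  With a
# contiguous UID space and no gaps, MSN is simply offset + 1 here.
uo2m = array('H', range(1, UID_SLICE + 1))
assert len(uo2m) * uo2m.itemsize == 100_000  # ~100K payload per slice

# Momentary unpack for a UID-only FETCH pass: a plain list of
# full-sized ints (the ~1.6M of IV/UVs mentioned above), which is
# discarded once the response is written.
msns = uo2m.tolist()
```

Since 50000 (and even 65535) fits in 16 bits, this is also why the
slice size could grow to 64K without widening the storage format.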
|
|
This finally seems to make mutt header caching behave properly.
We expect to be able to safely load 50K IV/UVs in memory without
OOM, since that's "only" 1.6 MB that won't live beyond a single
event loop iteration. So create a simple array which can
quickly map MSNs in requests to UIDs and not leave out messages.
MSNs in the FETCH response will NOT be correct, since it's
inefficient to implement properly and mutt doesn't seem to
care.
Since the conversion code is easily shared, "UID SEARCH" can
allow the same MSN => UID mapping non-UID "FETCH" does.
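The shared conversion can be sketched in Python (hypothetical names;
the real code is Perl):

```python
# m2u: ordered list of UIDs in the slice; MSN n is position n (1-based).
def msns_to_uids(m2u, msns):
    """Map requested MSNs to UIDs, dropping out-of-range MSNs."""
    return [m2u[n - 1] for n in msns if 1 <= n <= len(m2u)]
```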
|
|
Supporting MSNs in long-lived connections beyond the lifetime of
a single request/response cycle is not scalable to a C10K
scenario. It's probably not needed, since most clients seem to
use UIDs.
A somewhat efficient implementation I can come up with uses
pack("S*", ...) (AKA "uint16_t mapping[50000]"); it has an overhead
of 100K per client socket on a mailbox with 50K messages. The
100K is a contiguous scalar, so it could be swapped out for
idle clients on most architectures if THP is disabled.
An alternative could be to use a tempfile as an allocator
partitioned into 100K chunks (or SQLite); but I'll only do that
if somebody presents a compelling case to support MSN SEARCH.
|
|
Note some of our limitations for potential hackers.
We'll be renaming "UID_BLOCK" to "UID_SLICE", since "block" is
an overused term and "slice" isn't used in our codebase. Also,
document how "slice" and "epochs" are similar concepts for
different clients.
|
|
Simple queries work; more complex queries involving parentheses,
"OR", and "NOT" don't work yet.
Tested with "=b", "=B", and "=H" search and limits in mutt
on both v1 and v2 with multiple Xapian shards.
|
|
We can share a bit of code with FETCH to refill UID
ranges which hit the SQLite overview.
|
|
We can get exact values for EXISTS, UIDNEXT using SQLite
rather than calculating off $ibx->mm->max ourselves.
Furthermore, $ibx->mm is less useful than $ibx->over for IMAP
(and for our read-only daemons in general) so do not depend on
$ibx->mm outside of startup/reload to save FDs and reduce kernel
page cache footprint.
|
|
This appears to significantly improve header caching behavior
with mutt. With the current public-inbox.org/git mirror(*),
mutt will only re-FETCH the last ~300 or so messages in the
final "inbox.comp.version-control.git.7" mailbox, instead of
~49,000 messages every time.
It's not perfect, but a 500ms query is better than a >10s query
and mutt itself spends as much time loading its header cache.
(*) there are many gaps in NNTP article numbers (UIDs) due to
spam removal from public-inbox-learn.
|
|
Since headers are big and include a lot of lines MUAs don't
care about, we can skip the CRLF_HDR ops and just do the
CRLF conversion in partial_hdr_get and partial_hdr_not.
This is another 10-15% speedup for mutt w/o header caching.
|
|
This speeds up requests from mutt for HEADER.FIELDS by around 10%
since we don't waste time doing CRLF conversion on large message
bodies that get discarded, anyways.
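The idea, sketched in Python (the real code operates on Eml objects in
Perl; the helper name here is hypothetical):

```python
def crlf_header_only(msg: bytes) -> bytes:
    """LF -> CRLF conversion on the header alone; the body is discarded."""
    hdr, _sep, _body = msg.partition(b'\n\n')
    # normalize first so already-CRLF input isn't doubled
    hdr = hdr.replace(b'\r\n', b'\n').replace(b'\n', b'\r\n')
    return hdr + b'\r\n\r\n'
```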
|
|
Ensure {uid_base} is always set, so we don't need to add `//'
checks everywhere. Furthermore, this fixes a hard-to-test bug
where the STATUS command would inadvertently clobber {uid_base}.
|
|
The performance problem with mutt not using header caches isn't
fixed, yet, but mutt header caching seems to depend on MSNs
(message sequence numbers). We'll switch to storing the 0-based
{uid_base} instead of the 1-based {uid_min} since it simplifies
most of our code.
|
|
RFC 2683 section 3.2.1.5 recommends it:
> For its part, a server should allow for a command line of at least
> 8000 octets. This provides plenty of leeway for accepting reasonable
> length commands from clients. The server should send a BAD response
> to a command that does not end within the server's maximum accepted
> command length.
To conserve memory, we won't bother reading the entire line
before sending the BAD response and disconnecting them.
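A minimal sketch of the check in Python (names are hypothetical; the
server does this inside its event loop on the read buffer):

```python
MAX_LINE = 8000  # RFC 2683 3.2.1.5 recommended minimum

def check_rbuf(rbuf: bytes):
    """Return an error response if the buffer grew too long without CRLF."""
    if b'\r\n' not in rbuf and len(rbuf) > MAX_LINE:
        # reply BAD and disconnect; no need to buffer the rest of the line
        return b'* BAD command too long\r\n'
    return None  # keep reading
```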
|
|
While selecting a mailbox is done case-insensitively, "INBOX" is
special for the LIST command, according to RFC 3501 6.3.8:
> The special name INBOX is included in the output from LIST, if
> INBOX is supported by this server for this user and if the
> uppercase string "INBOX" matches the interpreted reference and
> mailbox name arguments with wildcards as described above. The
> criteria for omitting INBOX is whether SELECT INBOX will
> return failure; it is not relevant whether the user's real
> INBOX resides on this or some other server.
Thus, the existing news.public-inbox.org convention of naming
newsgroups starting with "inbox." needs to be special-cased to
not confuse clients.
While we're at it, do not create ".0" for dummy newsgroups if
they're selected, either.
|
|
It seems required based on my reading of RFC 3501 for
the non-UID "FETCH" command.
|
|
Since we started indexing the CRLF-adjusted size of messages,
we can take an order-of-magnitude speedup for certain MUAs
which fetch this attribute without needing much else.
Admins are encouraged to --reindex existing inboxes for IMAP
support, anyways. It won't be fatal if it's not reindexed, but
some client bugs and warnings can be fixed and they'll be able
to support more of IMAP.
|
|
This is one boolean attribute not worth wasting space for.
With 20000 sockets, this reduces RSS by around 5% at a glance,
and locked hashes don't do us much good when clients
use compression, anyways.
|
|
RFC 3501 section 5.4 requires this to be >= 30 minutes,
10x higher than what is recommended for NNTP. Fortunately
our design is reasonably memory-efficient despite being Perl.
|
|
We should not waste memory for IDLE unless it's used on the most
recent inbox slice. We also need to keep the IDLE connection
alive regardless of $PublicInbox::DS::EXPTIME.
|
|
We can speed up this common mutt request by another 2-3x by not
loading the entire smsg from SQLite, just the UID.
|
|
We can avoid loading the entire message from git when mutt makes
a "UID FETCH" request for "(UID FLAGS)". This speeds mutt up by
more than an order-of-magnitude in informal measurements.
|
|
This is just a hair faster and cacheable in the future, if we
need it. Most notably, this avoids doing PublicInbox::Eml->new
for simple "RFC822", "BODY[]", and "RFC822.SIZE" requests.
|
|
Dummy messages make for bad user experience with MUAs which
still use sequence numbers. Not being able to fetch a message
doesn't seem fatal in mutt, so just ignore (sometimes large)
gaps.
|
|
Since it seems somewhat common for IMAP clients to limit
searches by sent Date: or INTERNALDATE, we can rely on
the NNTP/WWW-optimized overview DB.
For other queries, we'll have to depend on the Xapian DB.
|
|
We won't support searching across mailboxes, just yet;
but maybe in the future.
|
|
None of the new cases are wired up, yet, but existing cases
still work.
|
|
No point in spewing "uninitialized" warnings into logs when
the cat jumps on the Enter key.
|
|
We can share code between them and account for each 50K
mailbox slice. However, we must overreport these for
non-zero slices and just return lots of empty data for
high-numbered slices because some MUAs still insist
on non-UID fetches.
|
|
Some clients insist on sending "INBOX" in all caps,
since it's special in RFC 3501.
|
|
Having two large numbers separated by a dash can make visual
comparisons difficult when numbers are in the 3,000,000 range
for LKML. So avoid the $UID_END value, since it can be
calculated from $UID_MIN. And we can avoid large values of
$UID_MIN, too, by instead storing the block index and just
multiplying it by 50000 (and adding 1) on the server side.
Of course, LKML still goes up to 72, at the moment.
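The server-side arithmetic is tiny; a sketch with hypothetical names:

```python
UID_SLICE = 50000

def slice_bounds(n: int):
    """Mailbox suffix ".$n" -> (UID_MIN, UID_END), derived server-side."""
    uid_min = n * UID_SLICE + 1
    return uid_min, uid_min + UID_SLICE - 1
```

So slice 72 covers UIDs 3600001 through 3650000, while the mailbox
name only carries the small "72".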
|
|
Finish up the IMAP-only portion of iterative config reloading,
which allows us to create all sub-ranges of an inbox up front.
The InboxIdler still uses ->each_inbox which will struggle with
100K inboxes.
Having messages in the top-level newsgroup name of an inbox will
still waste bandwidth for clients which want to do full syncs
once there's a rollover to a new 50K range. So instead, make
every inbox accessible exclusively via 50K slices in the form of
"$NEWSGROUP.$UID_MIN-$UID_END".
This introduces the DummyInbox, which makes $NEWSGROUP
and every parent component a selectable, empty inbox.
This aids navigation with mutt and possibly other MUAs.
Finally, the xt/perf-imap-list maintainer test is broken, now,
so remove it. The grep perlfunc is already proven effective,
and we'll have separate tests for mocking out ~100k inboxes.
|
|
This limit on mailbox size should keep users of tools like
mbsync (isync) and offlineimap happy, since typical filesystems
struggle with giant Maildirs.
I chose 50K since it's a bit more than what LKML typically sees
in a month and still manages to give acceptable performance on
my ancient Centrino laptop.
There were also no responses to my original proposal at:
<https://public-inbox.org/meta/20200519090000.GA24273@dcvr/>
so no objections, either :>
|
|
"$UID_START:*" needs to return at least one message according
to RFC 3501 section 6.4.8.
While we're in the area, coerce ranges to (unsigned) integers by
adding zero ("+ 0") to reduce memory overhead.
|
|
Trying to avoid a circular reference by relying on $ibx object
here makes no sense, since skipping GitAsyncCat::close will
result in an FD leak, anyways. So keep GitAsyncCat contained to
git-only operations, since we'll be using it for Solver in the
distant future.
|
|
Since IMAP yields control to GitAsyncCat, IMAP->event_step may
be invoked with {long_cb} still active. We must be sure to
bail out of IMAP->event_step if that happens and continue to let
GitAsyncCat drive IMAP.
This also improves fairness by never processing more than one
request per ->event_step.
|
|
The RFC 3501 `sequence-set' definition allows comma-delimited
ranges, so we'll support it in case clients send them.
Coalescing overlapping ranges isn't required, so we won't
support it, as such an attempt to save bandwidth would waste
memory on the server, instead.
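A Python sketch of sequence-set parsing (hypothetical names; the real
parsing is done in Perl):

```python
def parse_sequence_set(s: str, uid_max: int):
    """RFC 3501 sequence-set -> list of (lo, hi) ranges.
    '*' means the highest UID; overlapping ranges are NOT coalesced."""
    out = []
    for part in s.split(','):
        lo, _, hi = part.partition(':')
        lo = uid_max if lo == '*' else int(lo)
        hi = lo if hi == '' else (uid_max if hi == '*' else int(hi))
        out.append((min(lo, hi), max(lo, hi)))  # '2:1' means '1:2'
    return out
```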
|
|
Since we only support read-only operation, we can't save
subscriptions requested by clients. So just list no inboxes as
subscribed, since some MUAs may blindly try to fetch everything
they're subscribed to.
|
|
This ought to improve overall performance with multiple clients.
Single client performance suffers a tiny bit due to extra
syscall overhead from epoll.
This also makes the existing async interface easier-to-use,
since calling cat_async_begin is no longer required.
|
|
While we can't memoize the regexp forever like we do with other
Eml users, we can still benefit from caching regexp compilation
on a per-request basis.
A FETCH request from mutt on a 4K message inbox is around 8%
faster after this. Since regexp compilation via qr// isn't
unbearably slow, a shared cache probably isn't worth the
trouble of implementing. A per-request cache seems enough.
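The per-request cache amounts to something like this Python sketch
(the real code caches qr// compilation in Perl; names hypothetical):

```python
import re

def handle_fetch(msgs, header_names):
    """Serve one FETCH request; compiled regexps die with the request."""
    cache = {}  # pattern -> compiled regexp, per-request lifetime

    def rx(pat):
        if pat not in cache:
            cache[pat] = re.compile(pat, re.M | re.I)
        return cache[pat]

    # the same pattern is reused (not recompiled) for every message:
    want = rx(r'^(?:%s):' % '|'.join(map(re.escape, header_names)))
    return [m for m in msgs if want.search(m)]
```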
|
|
It seems worthless to support CLOSE for read-only inboxes, but
mutt sends it, so don't return a BAD error for proper use.
|
|
They're not specified in RFC 3501 for responses, and at least
mutt fails to handle them.
|
|
We'll return dummy messages for now when sequence numbers go
missing, in case clients can't handle missing messages.
|
|
While the contents of normal %want hash keys are bounded in
size, %partial can cause more overhead and lead to repeated sort
calls on multi-message fetches. So sort it once and use
arrayrefs to make the data structure more compact.
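The sort-once idea, sketched in Python (hypothetical names; the real
code stores Perl arrayrefs):

```python
def prep_partial(partial):
    """Sort each requested header-field set once per FETCH command,
    storing compact tuples so multi-message fetches don't re-sort."""
    return {op: tuple(sorted(fields)) for op, fields in partial.items()}
```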
|