Found in 10 comments on Hacker News
DamonHD · 2022-10-31 · Original thread
You had to know things and keep them in your head, and have a pile of text books otherwise! Plus no normal app had access to gigabytes of storage, and indeed that was a vast amount for many years more[1]. Skills learnt then in terms of memory and CPU efficiency are still valuable now, though your typical smartphone is more powerful than the supercomputer replacements I used to look after for an oil company...

[1] https://www.amazon.co.uk/Managing-Gigabytes-Compressing-Inde... (1999)

skoczko · 2022-08-25 · Original thread
If by a "search engine" you mean a tool to index and retrieve documents (essentially what the term means in traditional Information Retrieval, e.g. Lucene, SOLR, Elastic), then this is a pretty good book on the subject that taught me a lot:

https://www.amazon.com/Managing-Gigabytes-Compressing-Multim...

mindcrime · 2021-12-02 · Original thread
I don't even know if anybody has written a book specifically about search at "web scale" (no MongoDB jokes here, please). But about the closest things I know of would be something like:

https://www.amazon.com/Managing-Gigabytes-Compressing-Multim...

https://www.amazon.com/Information-Retrieval-Implementing-Ev...

https://www.amazon.com/Introduction-Information-Retrieval-Ch...

PaulHoule · 2021-06-18 · Original thread
This book has a nice treatment of that kind of compression:

https://www.amazon.com/Managing-Gigabytes-Compressing-Multim...

For instance you might keep track of facts like

   the word "the" is contained in document 1
   the word "john" is contained in document 1
   the word "the" is contained in document 2
   ...
   the word "john" is contained in document 12
and you code the gaps; the word "the" appears in every document and the gap is always 1, but the gap for "john" is 11. With a variable-sized encoding you use fewer bits for smaller gaps -- with that kind of encoding you don't have to make "the" be a stopword because you can afford to encode all the postings.
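The scheme described above can be sketched in a few lines of Python. This is a minimal illustration (not the book's exact implementation) using variable-byte coding, one of the simpler variable-sized encodings covered in Managing Gigabytes: each gap is split into 7-bit groups, and the high bit marks the final byte of a number, so a gap of 1 costs a single byte while larger gaps cost more.

```python
def to_gaps(doc_ids):
    """Turn a sorted posting list of doc IDs into successive gaps."""
    prev, gaps = 0, []
    for d in doc_ids:
        gaps.append(d - prev)
        prev = d
    return gaps

def vbyte_encode(gaps):
    """Variable-byte encode: 7 data bits per byte, high bit set on the last byte."""
    out = bytearray()
    for n in gaps:
        chunk = [n & 0x7F]
        n >>= 7
        while n:
            chunk.insert(0, n & 0x7F)
            n >>= 7
        chunk[-1] |= 0x80  # mark terminating byte of this number
        out.extend(chunk)
    return bytes(out)

def vbyte_decode(data):
    """Invert vbyte_encode, recovering the list of gaps."""
    gaps, n = [], 0
    for b in data:
        n = (n << 7) | (b & 0x7F)
        if b & 0x80:
            gaps.append(n)
            n = 0
    return gaps

# Postings for a frequent word compress to one byte per document:
the_postings = [1, 2, 3, 4, 5]          # gaps are all 1
john_postings = [1, 12]                 # gap of 11 between documents
encoded = vbyte_encode(to_gaps(the_postings))
# 5 postings -> 5 bytes, versus a fixed-width int per posting
```

Decoding reverses the process: `vbyte_decode` recovers the gaps, and a running sum turns them back into document IDs. This is why a common word like "the" need not be a stopword: with one byte per posting, its full list stays affordable.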

nostrademons · 2017-09-13 · Original thread
Separate out the concepts of "search infrastructure" (how documents and posting lists are stored in terms of bits on disk & RAM) and "ranking functions" (how queries are matched to documents).

The former is basically a solved problem. Lucene/ElasticSearch and Google are using basically the same techniques, and you can read about them in Managing Gigabytes [1], which was first published over 2 decades ago. Google may be a generation or so ahead - they were working on a new system to take full advantage of SSDs (which turn out to be very good for search, because it's a very read-heavy workload) when I left, and I don't really know the details of it. But ElasticSearch is a perfectly adequate retrieval system, and it does basically the same stuff that Google's systems did circa 2013, and even does some stuff better than Google.

The real interesting work in search is in ranking functions, and this is where nobody comes close to Google. Some of this, as other commenters note, is because Google has more data than anyone else. Some of it is just because there've been more man-hours poured into it. IMHO, it's pretty doubtful that an open-source project could attract that sort of focused knowledge-work (trust me; it's pretty laborious) when Google will pay half a mil per year for skilled information-retrieval Ph.Ds.

[1] https://www.amazon.com/Managing-Gigabytes-Compressing-Multim...

verytrivial · 2017-04-26 · Original thread
That name sounds very familiar, as does the feature set. Managing Gigabytes[1], or "mg", was the output of University of Melbourne and RMIT research in the 1990s. It went on to be commercialized as SIM and later TeraText[2] and has largely disappeared into the government intelligence indexing and consulting-heavy systems space (where it is now presumably being trounced by Palantir).

[1] https://www.amazon.com/Managing-Gigabytes-Compressing-Indexi... - Note review from Peter Norvig!

[2] http://www.teratext.com/

drblast · 2014-07-28 · Original thread
If you enjoy reading articles about the rediscovery of indexing large amounts of read-only data, I'd highly recommend reading this book which is a treasure trove about this kind of work:

http://www.amazon.com/Managing-Gigabytes-Compressing-Multime...

sajid · 2011-12-21 · Original thread
I recommend reading 'Managing Gigabytes' by Witten, Moffat and Bell:

http://www.amazon.com/Managing-Gigabytes-Compressing-Multime...

dejv · 2010-05-04 · Original thread
You can take a look on Managing Gigabytes (http://www.amazon.com/Managing-Gigabytes-Compressing-Multime...)

It is nice book, but might be little bit outdated.

slackerIII · 2008-03-03 · Original thread
I always have to plug Managing Gigabytes whenever a discussion of computer books comes up. Great reference for anyone dealing with searching or compressing large amounts of information:

http://www.amazon.com/Managing-Gigabytes-Compressing-Multime...