
POC: Tantivy documents as a trait #2071


Merged: 69 commits, merged Oct 2, 2023

Changes from all commits (69 commits):
a044195 fix windows build (#1) (ChillFish8, Jun 3, 2023)
fd65797 Add doc traits (ChillFish8, Jun 5, 2023)
f5e570d Add field value iter (ChillFish8, Jun 5, 2023)
82ccece Add value and serialization (ChillFish8, Jun 5, 2023)
6bfa500 Adjust order (ChillFish8, Jun 5, 2023)
49b5414 Fix bug (ChillFish8, Jun 5, 2023)
08eaa17 Correct type (ChillFish8, Jun 5, 2023)
18a6680 Fix generic bugs (ChillFish8, Jun 5, 2023)
843ce14 Reformat code (ChillFish8, Jun 5, 2023)
acbda6a Add generic to index writer which I forgot about (ChillFish8, Jun 5, 2023)
eb8ada7 Fix missing generics on single segment writer (ChillFish8, Jun 5, 2023)
597e8c5 Add missing type export (ChillFish8, Jun 7, 2023)
9e73ada Add default methods for convenience (ChillFish8, Jun 7, 2023)
f2ecb61 Cleanup (ChillFish8, Jun 7, 2023)
1e61736 Fix more-like-this query to use standard types (ChillFish8, Jun 8, 2023)
7b6293a Update API and fix tests (ChillFish8, Jun 9, 2023)
73a344c Merge branch 'quickwit-oss:main' into main (ChillFish8, Jun 9, 2023)
da6f81d Add doc traits (ChillFish8, Jun 5, 2023)
baf4be2 Add field value iter (ChillFish8, Jun 5, 2023)
91dbe3f Add value and serialization (ChillFish8, Jun 5, 2023)
2b75e2e Adjust order (ChillFish8, Jun 5, 2023)
e958f3a Fix bug (ChillFish8, Jun 5, 2023)
ab6fbde Correct type (ChillFish8, Jun 5, 2023)
c2bc724 Rebase main and fix conflicts (ChillFish8, Jun 5, 2023)
e81c4b8 Reformat code (ChillFish8, Jun 5, 2023)
a7bfd43 Merge upstream (ChillFish8, Jun 5, 2023)
6180e30 Fix missing generics on single segment writer (ChillFish8, Jun 5, 2023)
2d48f0d Add missing type export (ChillFish8, Jun 7, 2023)
037ad54 Add default methods for convenience (ChillFish8, Jun 7, 2023)
ed48f06 Cleanup (ChillFish8, Jun 7, 2023)
54ec1b1 Fix more-like-this query to use standard types (ChillFish8, Jun 8, 2023)
684686e Update API and fix tests (ChillFish8, Jun 9, 2023)
b0727ee Merge remote-tracking branch 'origin/add-doc-as-trait-v2' into add-do… (ChillFish8, Jun 9, 2023)
8b4de41 Add tokenizer improvements from previous commits (ChillFish8, Jun 9, 2023)
34278eb Add tokenizer improvements from previous commits (ChillFish8, Jun 9, 2023)
c33c419 Reformat (ChillFish8, Jun 9, 2023)
29962b8 Fix unit tests (ChillFish8, Jun 9, 2023)
d080019 Fix unit tests (ChillFish8, Jun 9, 2023)
3456423 Use enum in changes (ChillFish8, Jul 6, 2023)
eb97a8c Stage changes (ChillFish8, Jul 8, 2023)
a44ca00 Add new deserializer logic (ChillFish8, Jul 9, 2023)
f01020a Add serializer integration (ChillFish8, Jul 9, 2023)
8aa3d94 Add document deserializer (ChillFish8, Jul 9, 2023)
5562bee Implement new (de)serialization api for existing types (ChillFish8, Jul 9, 2023)
ac2d428 Fix bugs and type errors (ChillFish8, Jul 9, 2023)
1a8579b Add helper implementations (ChillFish8, Jul 9, 2023)
1884a01 Fix errors (ChillFish8, Jul 9, 2023)
11164b4 Reformat code (ChillFish8, Jul 9, 2023)
1a9586f Add unit tests and some code organisation for serialization (ChillFish8, Jul 9, 2023)
c25aa36 Add unit tests to deserializer (ChillFish8, Jul 9, 2023)
e5d68bf Add some small docs (ChillFish8, Jul 9, 2023)
30639df Add support for deserializing serde values (ChillFish8, Jul 9, 2023)
ae519e9 Reformat (ChillFish8, Jul 9, 2023)
895686e Fix typo (ChillFish8, Jul 9, 2023)
02335fe Fix typo (ChillFish8, Jul 9, 2023)
46ceccd Change repr of facet (ChillFish8, Jul 15, 2023)
08fcf3a Remove unused trait methods (ChillFish8, Jul 25, 2023)
59cbd59 Add child value type (ChillFish8, Jul 28, 2023)
365d173 Merge branch 'main' into add-doc-as-trait-v2 (ChillFish8, Sep 27, 2023)
43ac334 Resolve comments (ChillFish8, Sep 27, 2023)
9b6b94a Fix build (ChillFish8, Sep 27, 2023)
b0d61f1 Fix more build errors (ChillFish8, Sep 27, 2023)
52db5ad Fix more build errors (ChillFish8, Sep 27, 2023)
41f594d Fix the tests I missed (ChillFish8, Sep 27, 2023)
e26dbbb Fix examples (ChillFish8, Sep 27, 2023)
45d5222 fix numerical order, serialize PreTok Str (PSeitz, Oct 2, 2023)
b6a06c2 fix coverage (PSeitz, Oct 2, 2023)
af535f4 rename Document to TantivyDocument, rename DocumentAccess to Document (PSeitz, Oct 2, 2023)
841d9ec fix coverage (PSeitz, Oct 2, 2023)
31 changes: 18 additions & 13 deletions benches/index-bench.rs
@@ -39,9 +39,9 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
let lines = get_lines(HDFS_LOGS);
b.iter(|| {
let index = Index::create_in_ram(schema.clone());
let index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
let index_writer: IndexWriter = index.writer_with_num_threads(1, 100_000_000).unwrap();
for doc_json in &lines {
let doc = schema.parse_document(doc_json).unwrap();
let doc = Document::parse_json(&schema, doc_json).unwrap();
index_writer.add_document(doc).unwrap();
}
})
@@ -50,9 +50,10 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
let lines = get_lines(HDFS_LOGS);
b.iter(|| {
let index = Index::create_in_ram(schema.clone());
let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
let mut index_writer: IndexWriter =
index.writer_with_num_threads(1, 100_000_000).unwrap();
for doc_json in &lines {
let doc = schema.parse_document(doc_json).unwrap();
let doc = Document::parse_json(&schema, doc_json).unwrap();
index_writer.add_document(doc).unwrap();
}
index_writer.commit().unwrap();
@@ -62,9 +63,9 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
let lines = get_lines(HDFS_LOGS);
b.iter(|| {
let index = Index::create_in_ram(schema_with_store.clone());
let index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
let index_writer: IndexWriter = index.writer_with_num_threads(1, 100_000_000).unwrap();
for doc_json in &lines {
let doc = schema.parse_document(doc_json).unwrap();
let doc = Document::parse_json(&schema, doc_json).unwrap();
index_writer.add_document(doc).unwrap();
}
})
@@ -73,9 +74,10 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
let lines = get_lines(HDFS_LOGS);
b.iter(|| {
let index = Index::create_in_ram(schema_with_store.clone());
let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
let mut index_writer: IndexWriter =
index.writer_with_num_threads(1, 100_000_000).unwrap();
for doc_json in &lines {
let doc = schema.parse_document(doc_json).unwrap();
let doc = Document::parse_json(&schema, doc_json).unwrap();
index_writer.add_document(doc).unwrap();
}
index_writer.commit().unwrap();
@@ -86,7 +88,8 @@ pub fn hdfs_index_benchmark(c: &mut Criterion) {
b.iter(|| {
let index = Index::create_in_ram(dynamic_schema.clone());
let json_field = dynamic_schema.get_field("json").unwrap();
let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
let mut index_writer: IndexWriter =
index.writer_with_num_threads(1, 100_000_000).unwrap();
for doc_json in &lines {
let json_val: serde_json::Map<String, serde_json::Value> =
serde_json::from_str(doc_json).unwrap();
@@ -113,7 +116,7 @@ pub fn gh_index_benchmark(c: &mut Criterion) {
b.iter(|| {
let json_field = dynamic_schema.get_field("json").unwrap();
let index = Index::create_in_ram(dynamic_schema.clone());
let index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
let index_writer: IndexWriter = index.writer_with_num_threads(1, 100_000_000).unwrap();
for doc_json in &lines {
let json_val: serde_json::Map<String, serde_json::Value> =
serde_json::from_str(doc_json).unwrap();
@@ -127,7 +130,8 @@ pub fn gh_index_benchmark(c: &mut Criterion) {
b.iter(|| {
let json_field = dynamic_schema.get_field("json").unwrap();
let index = Index::create_in_ram(dynamic_schema.clone());
let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
let mut index_writer: IndexWriter =
index.writer_with_num_threads(1, 100_000_000).unwrap();
for doc_json in &lines {
let json_val: serde_json::Map<String, serde_json::Value> =
serde_json::from_str(doc_json).unwrap();
@@ -154,7 +158,7 @@ pub fn wiki_index_benchmark(c: &mut Criterion) {
b.iter(|| {
let json_field = dynamic_schema.get_field("json").unwrap();
let index = Index::create_in_ram(dynamic_schema.clone());
let index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
let index_writer: IndexWriter = index.writer_with_num_threads(1, 100_000_000).unwrap();
for doc_json in &lines {
let json_val: serde_json::Map<String, serde_json::Value> =
serde_json::from_str(doc_json).unwrap();
@@ -168,7 +172,8 @@ pub fn wiki_index_benchmark(c: &mut Criterion) {
b.iter(|| {
let json_field = dynamic_schema.get_field("json").unwrap();
let index = Index::create_in_ram(dynamic_schema.clone());
let mut index_writer = index.writer_with_num_threads(1, 100_000_000).unwrap();
let mut index_writer: IndexWriter =
index.writer_with_num_threads(1, 100_000_000).unwrap();
for doc_json in &lines {
let json_val: serde_json::Map<String, serde_json::Value> =
serde_json::from_str(doc_json).unwrap();
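
A minimal sketch of the indexing pattern these benchmarks now use, assuming tantivy's post-rename exports (IndexWriter is a type alias defaulting to TantivyDocument; the benches above still spell the pre-rename Document name). The schema field and JSON payload are illustrative.

use tantivy::schema::{Schema, TEXT};
use tantivy::{Index, IndexWriter, TantivyDocument};

fn main() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    schema_builder.add_text_field("body", TEXT); // illustrative field
    let schema = schema_builder.build();
    let index = Index::create_in_ram(schema.clone());

    // The writer is now generic over the document type, so call sites
    // name the concrete type explicitly.
    let mut index_writer: IndexWriter =
        index.writer_with_num_threads(1, 100_000_000)?;

    // JSON parsing moved from `schema.parse_document(json)` onto the
    // document type itself.
    let doc = TantivyDocument::parse_json(&schema, r#"{"body": "hello"}"#)?;
    index_writer.add_document(doc)?;
    index_writer.commit()?;
    Ok(())
}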
15 changes: 15 additions & 0 deletions common/src/datetime.rs
@@ -1,11 +1,14 @@
#![allow(deprecated)]

use std::fmt;
use std::io::{Read, Write};

use serde::{Deserialize, Serialize};
use time::format_description::well_known::Rfc3339;
use time::{OffsetDateTime, PrimitiveDateTime, UtcOffset};

use crate::BinarySerializable;

/// Precision with which datetimes are truncated when stored in fast fields. This setting is only
/// relevant for fast fields. In the docstore, datetimes are always saved with nanosecond precision.
#[derive(
@@ -164,3 +167,15 @@ impl fmt::Debug for DateTime {
f.write_str(&utc_rfc3339)
}
}

impl BinarySerializable for DateTime {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> std::io::Result<()> {
let timestamp_micros = self.into_timestamp_micros();
<i64 as BinarySerializable>::serialize(&timestamp_micros, writer)
}

fn deserialize<R: Read>(reader: &mut R) -> std::io::Result<Self> {
let timestamp_micros = <i64 as BinarySerializable>::deserialize(reader)?;
Ok(Self::from_timestamp_micros(timestamp_micros))
}
}
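
A round-trip sketch of the impl above, assuming the crate is pulled in as tantivy_common and that DateTime carries a derived PartialEq; the timestamp is arbitrary. Since the value is persisted as an i64 of microseconds, anything finer than microsecond precision would be truncated on the way through.

use tantivy_common::{BinarySerializable, DateTime};

fn main() -> std::io::Result<()> {
    // Arbitrary instant, expressed in microseconds since the epoch.
    let original = DateTime::from_timestamp_micros(1_696_204_800_000_000);

    let mut buf: Vec<u8> = Vec::new();
    original.serialize(&mut buf)?;

    // `&[u8]` implements `Read`, so a slice serves as the reader.
    let decoded = DateTime::deserialize(&mut buf.as_slice())?;
    assert_eq!(decoded, original);
    Ok(())
}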
38 changes: 38 additions & 0 deletions common/src/serialize.rs
@@ -1,3 +1,4 @@
use std::borrow::Cow;
use std::io::{Read, Write};
use std::{fmt, io};

@@ -249,6 +250,43 @@ impl BinarySerializable for String {
}
}

impl<'a> BinarySerializable for Cow<'a, str> {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
let data: &[u8] = self.as_bytes();
VInt(data.len() as u64).serialize(writer)?;
writer.write_all(data)
}

fn deserialize<R: Read>(reader: &mut R) -> io::Result<Cow<'a, str>> {
let string_length = VInt::deserialize(reader)?.val() as usize;
let mut result = String::with_capacity(string_length);
reader
.take(string_length as u64)
.read_to_string(&mut result)?;
Ok(Cow::Owned(result))
}
}

impl<'a> BinarySerializable for Cow<'a, [u8]> {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
VInt(self.len() as u64).serialize(writer)?;
for it in self.iter() {
it.serialize(writer)?;
}
Ok(())
}

fn deserialize<R: Read>(reader: &mut R) -> io::Result<Cow<'a, [u8]>> {
let num_items = VInt::deserialize(reader)?.val();
let mut items: Vec<u8> = Vec::with_capacity(num_items as usize);
for _ in 0..num_items {
let item = u8::deserialize(reader)?;
items.push(item);
}
Ok(Cow::Owned(items))
}
}

#[cfg(test)]
pub mod test {

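
A similar round-trip sketch for the Cow impls (same tantivy_common import assumption). Worth noting: deserialization always yields Cow::Owned, whichever variant was serialized, because the bytes are read into a fresh buffer.

use std::borrow::Cow;

use tantivy_common::BinarySerializable;

fn main() -> std::io::Result<()> {
    let original: Cow<str> = Cow::Borrowed("hello");
    let mut buf: Vec<u8> = Vec::new();
    original.serialize(&mut buf)?; // VInt length prefix, then the raw bytes

    let decoded = <Cow<str>>::deserialize(&mut buf.as_slice())?;
    assert!(matches!(decoded, Cow::Owned(_))); // never the borrowed variant
    assert_eq!(decoded, original);
    Ok(())
}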
6 changes: 3 additions & 3 deletions examples/aggregation.rs
@@ -12,7 +12,7 @@ use tantivy::aggregation::agg_result::AggregationResults;
use tantivy::aggregation::AggregationCollector;
use tantivy::query::AllQuery;
use tantivy::schema::{self, IndexRecordOption, Schema, TextFieldIndexing, FAST};
use tantivy::Index;
use tantivy::{Index, IndexWriter, TantivyDocument};

fn main() -> tantivy::Result<()> {
// # Create Schema
@@ -132,10 +132,10 @@ fn main() -> tantivy::Result<()> {

let stream = Deserializer::from_str(data).into_iter::<Value>();

let mut index_writer = index.writer(50_000_000)?;
let mut index_writer: IndexWriter = index.writer(50_000_000)?;
let mut num_indexed = 0;
for value in stream {
let doc = schema.parse_document(&serde_json::to_string(&value.unwrap())?)?;
let doc = TantivyDocument::parse_json(&schema, &serde_json::to_string(&value.unwrap())?)?;
index_writer.add_document(doc)?;
num_indexed += 1;
if num_indexed > 4 {
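
The loop body above, extracted into a hedged helper: each serde_json::Value from the stream is re-serialized to a string and handed to the new associated parser. The function name is hypothetical.

use tantivy::schema::Schema;
use tantivy::TantivyDocument;

// Hypothetical helper mirroring the example's loop body.
fn doc_from_value(
    schema: &Schema,
    value: &serde_json::Value,
) -> tantivy::Result<TantivyDocument> {
    let json = serde_json::to_string(value)?;
    let doc = TantivyDocument::parse_json(schema, &json)?;
    Ok(doc)
}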
10 changes: 5 additions & 5 deletions examples/basic_search.rs
@@ -15,7 +15,7 @@
use tantivy::collector::TopDocs;
use tantivy::query::QueryParser;
use tantivy::schema::*;
use tantivy::{doc, Index, ReloadPolicy};
use tantivy::{doc, Index, IndexWriter, ReloadPolicy};
use tempfile::TempDir;

fn main() -> tantivy::Result<()> {
@@ -75,7 +75,7 @@ fn main() -> tantivy::Result<()> {
// Here we give tantivy a budget of `50MB`.
// Using a bigger memory_arena for the indexer may increase
// throughput, but 50 MB is already plenty.
let mut index_writer = index.writer(50_000_000)?;
let mut index_writer: IndexWriter = index.writer(50_000_000)?;

// Let's index our documents!
// We first need a handle on the title and the body field.
@@ -87,7 +87,7 @@ fn main() -> tantivy::Result<()> {
let title = schema.get_field("title").unwrap();
let body = schema.get_field("body").unwrap();

let mut old_man_doc = Document::default();
let mut old_man_doc = TantivyDocument::default();
old_man_doc.add_text(title, "The Old Man and the Sea");
old_man_doc.add_text(
body,
@@ -217,8 +217,8 @@ fn main() -> tantivy::Result<()> {
// the document returned will only contain
// a title.
for (_score, doc_address) in top_docs {
let retrieved_doc = searcher.doc(doc_address)?;
println!("{}", schema.to_json(&retrieved_doc));
let retrieved_doc: TantivyDocument = searcher.doc(doc_address)?;
println!("{}", retrieved_doc.to_json(&schema));
}

// We can also get an explanation to understand
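
The retrieval-side change in a nutshell: Searcher::doc is now generic over the document type, so callers choose the concrete type with an annotation or turbofish, and JSON rendering moves from schema.to_json(&doc) onto the document. A sketch, assuming a searcher, schema, and hit address obtained as in the example above:

use tantivy::schema::Schema;
use tantivy::{DocAddress, Searcher, TantivyDocument};

fn print_hit(
    searcher: &Searcher,
    schema: &Schema,
    doc_address: DocAddress,
) -> tantivy::Result<()> {
    // The annotation picks the concrete document type to deserialize into.
    let retrieved_doc: TantivyDocument = searcher.doc(doc_address)?;
    println!("{}", retrieved_doc.to_json(schema));
    Ok(())
}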
4 changes: 2 additions & 2 deletions examples/custom_collector.rs
@@ -13,7 +13,7 @@ use columnar::Column;
use tantivy::collector::{Collector, SegmentCollector};
use tantivy::query::QueryParser;
use tantivy::schema::{Schema, FAST, INDEXED, TEXT};
use tantivy::{doc, Index, Score, SegmentReader};
use tantivy::{doc, Index, IndexWriter, Score, SegmentReader};

#[derive(Default)]
struct Stats {
@@ -142,7 +142,7 @@ fn main() -> tantivy::Result<()> {
// this example.
let index = Index::create_in_ram(schema);

let mut index_writer = index.writer(50_000_000)?;
let mut index_writer: IndexWriter = index.writer(50_000_000)?;
index_writer.add_document(doc!(
product_name => "Super Broom 2000",
product_description => "While it is ok for short distance travel, this broom \
8 changes: 4 additions & 4 deletions examples/custom_tokenizer.rs
@@ -6,7 +6,7 @@ use tantivy::collector::TopDocs;
use tantivy::query::QueryParser;
use tantivy::schema::*;
use tantivy::tokenizer::NgramTokenizer;
use tantivy::{doc, Index};
use tantivy::{doc, Index, IndexWriter};

fn main() -> tantivy::Result<()> {
// # Defining the schema
@@ -62,7 +62,7 @@ fn main() -> tantivy::Result<()> {
//
// Here we use a buffer of 50MB per thread. Using a bigger
// memory arena for the indexer can increase its throughput.
let mut index_writer = index.writer(50_000_000)?;
let mut index_writer: IndexWriter = index.writer(50_000_000)?;
index_writer.add_document(doc!(
title => "The Old Man and the Sea",
body => "He was an old man who fished alone in a skiff in the Gulf Stream and \
@@ -103,8 +103,8 @@ fn main() -> tantivy::Result<()> {
let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?;

for (_, doc_address) in top_docs {
let retrieved_doc = searcher.doc(doc_address)?;
println!("{}", schema.to_json(&retrieved_doc));
let retrieved_doc: TantivyDocument = searcher.doc(doc_address)?;
println!("{}", retrieved_doc.to_json(&schema));
}

Ok(())
14 changes: 8 additions & 6 deletions examples/date_time_field.rs
@@ -5,7 +5,7 @@
use tantivy::collector::TopDocs;
use tantivy::query::QueryParser;
use tantivy::schema::{DateOptions, Schema, Value, INDEXED, STORED, STRING};
use tantivy::Index;
use tantivy::{Index, IndexWriter, TantivyDocument};

fn main() -> tantivy::Result<()> {
// # Defining the schema
@@ -22,16 +22,18 @@ fn main() -> tantivy::Result<()> {
// # Indexing documents
let index = Index::create_in_ram(schema.clone());

let mut index_writer = index.writer(50_000_000)?;
let mut index_writer: IndexWriter = index.writer(50_000_000)?;
// The dates are passed as string in the RFC3339 format
let doc = schema.parse_document(
let doc = TantivyDocument::parse_json(
&schema,
r#"{
"occurred_at": "2022-06-22T12:53:50.53Z",
"event": "pull-request"
}"#,
)?;
index_writer.add_document(doc)?;
let doc = schema.parse_document(
let doc = TantivyDocument::parse_json(
&schema,
r#"{
"occurred_at": "2022-06-22T13:00:00.22Z",
"event": "comment"
@@ -58,13 +60,13 @@ fn main() -> tantivy::Result<()> {
let count_docs = searcher.search(&*query, &TopDocs::with_limit(4))?;
assert_eq!(count_docs.len(), 1);
for (_score, doc_address) in count_docs {
let retrieved_doc = searcher.doc(doc_address)?;
let retrieved_doc = searcher.doc::<TantivyDocument>(doc_address)?;
assert!(matches!(
retrieved_doc.get_first(occurred_at),
Some(Value::Date(_))
));
assert_eq!(
schema.to_json(&retrieved_doc),
retrieved_doc.to_json(&schema),
r#"{"event":["comment"],"occurred_at":["2022-06-22T13:00:00.22Z"]}"#
);
}
12 changes: 6 additions & 6 deletions examples/deleting_updating_documents.rs
@@ -11,15 +11,15 @@
use tantivy::collector::TopDocs;
use tantivy::query::TermQuery;
use tantivy::schema::*;
use tantivy::{doc, Index, IndexReader};
use tantivy::{doc, Index, IndexReader, IndexWriter};

// A simple helper function to fetch a single document
// given its id from our index.
// It will be helpful to check our work.
fn extract_doc_given_isbn(
reader: &IndexReader,
isbn_term: &Term,
) -> tantivy::Result<Option<Document>> {
) -> tantivy::Result<Option<TantivyDocument>> {
let searcher = reader.searcher();

// This is the simplest query you can think of.
@@ -69,10 +69,10 @@ fn main() -> tantivy::Result<()> {

let index = Index::create_in_ram(schema.clone());

let mut index_writer = index.writer(50_000_000)?;
let mut index_writer: IndexWriter = index.writer(50_000_000)?;

// Let's add a couple of documents, for the sake of the example.
let mut old_man_doc = Document::default();
let mut old_man_doc = TantivyDocument::default();
old_man_doc.add_text(title, "The Old Man and the Sea");
index_writer.add_document(doc!(
isbn => "978-0099908401",
@@ -94,7 +94,7 @@ fn main() -> tantivy::Result<()> {
// Oops our frankenstein doc seems misspelled
let frankenstein_doc_misspelled = extract_doc_given_isbn(&reader, &frankenstein_isbn)?.unwrap();
assert_eq!(
schema.to_json(&frankenstein_doc_misspelled),
frankenstein_doc_misspelled.to_json(&schema),
r#"{"isbn":["978-9176370711"],"title":["Frankentein"]}"#,
);

@@ -136,7 +136,7 @@ fn main() -> tantivy::Result<()> {
// No more typo!
let frankenstein_new_doc = extract_doc_given_isbn(&reader, &frankenstein_isbn)?.unwrap();
assert_eq!(
schema.to_json(&frankenstein_new_doc),
frankenstein_new_doc.to_json(&schema),
r#"{"isbn":["978-9176370711"],"title":["Frankenstein"]}"#,
);

4 changes: 2 additions & 2 deletions examples/faceted_search.rs
@@ -17,7 +17,7 @@
use tantivy::collector::FacetCollector;
use tantivy::query::{AllQuery, TermQuery};
use tantivy::schema::*;
use tantivy::{doc, Index};
use tantivy::{doc, Index, IndexWriter};

fn main() -> tantivy::Result<()> {
// Let's create a temporary directory for the sake of this example
@@ -30,7 +30,7 @@ fn main() -> tantivy::Result<()> {
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);

let mut index_writer = index.writer(30_000_000)?;
let mut index_writer: IndexWriter = index.writer(30_000_000)?;

// For convenience, tantivy also comes with a macro to
// reduce the boilerplate above.