Class CharTokenizer
- All Implemented Interfaces:
Closeable, AutoCloseable
- Direct Known Subclasses:
LetterTokenizer, UnicodeWhitespaceTokenizer, WhitespaceTokenizer
The base class also provides factories to create instances of CharTokenizer using Java 8 lambdas or method references. It is possible to create an instance which behaves exactly like LetterTokenizer:
Tokenizer tok = CharTokenizer.fromTokenCharPredicate(Character::isLetter);
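For context, such a tokenizer is typically wrapped in an Analyzer before indexing; the following is a minimal sketch only (the Analyzer subclass shown here is illustrative, not part of this class):
// Sketch: a throwaway Analyzer whose tokenizer keeps runs of letters.
Analyzer letterAnalyzer = new Analyzer() {
  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    Tokenizer tok = CharTokenizer.fromTokenCharPredicate(Character::isLetter);
    return new TokenStreamComponents(tok);
  }
};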
-
Nested Class Summary
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource
AttributeSource.State
-
Field Summary
Fields
private int bufferIndex
private int dataLen
static final int DEFAULT_MAX_WORD_LEN
private int finalOffset
private static final int IO_BUFFER_SIZE
private final CharacterUtils.CharacterBuffer ioBuffer
private final int maxTokenLen
private int offset
private final OffsetAttribute offsetAtt
private final CharTermAttribute termAtt
Fields inherited from class org.apache.lucene.analysis.TokenStream
DEFAULT_TOKEN_ATTRIBUTE_FACTORY
-
Constructor Summary
Constructors
CharTokenizer(): Creates a new CharTokenizer instance
CharTokenizer(AttributeFactory factory): Creates a new CharTokenizer instance
CharTokenizer(AttributeFactory factory, int maxTokenLen): Creates a new CharTokenizer instance
-
Method Summary
Methods
final void end(): This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API).
static CharTokenizer fromSeparatorCharPredicate(IntPredicate separatorCharPredicate): Creates a new instance of CharTokenizer using a custom predicate, supplied as method reference or lambda expression.
static CharTokenizer fromSeparatorCharPredicate(AttributeFactory factory, IntPredicate separatorCharPredicate): Creates a new instance of CharTokenizer with the supplied attribute factory using a custom predicate, supplied as method reference or lambda expression.
static CharTokenizer fromTokenCharPredicate(IntPredicate tokenCharPredicate): Creates a new instance of CharTokenizer using a custom predicate, supplied as method reference or lambda expression.
static CharTokenizer fromTokenCharPredicate(AttributeFactory factory, IntPredicate tokenCharPredicate): Creates a new instance of CharTokenizer with the supplied attribute factory using a custom predicate, supplied as method reference or lambda expression.
final boolean incrementToken(): Consumers (i.e., IndexWriter) use this method to advance the stream to the next token.
protected abstract boolean isTokenChar(int c): Returns true iff a codepoint should be included in a token.
void reset(): This method is called by a consumer before it begins consumption using TokenStream.incrementToken().
Methods inherited from class org.apache.lucene.analysis.Tokenizer
close, correctOffset, setReader, setReaderTestPoint
Methods inherited from class org.apache.lucene.util.AttributeSource
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, endAttributes, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, removeAllAttributes, restoreState, toString
-
Field Details
-
offset
private int offset -
bufferIndex
private int bufferIndex -
dataLen
private int dataLen -
finalOffset
private int finalOffset -
DEFAULT_MAX_WORD_LEN
public static final int DEFAULT_MAX_WORD_LEN
- See Also: Constant Field Values
-
IO_BUFFER_SIZE
private static final int IO_BUFFER_SIZE
- See Also: Constant Field Values
-
maxTokenLen
private final int maxTokenLen -
termAtt
private final CharTermAttribute termAtt
-
offsetAtt
private final OffsetAttribute offsetAtt
-
ioBuffer
private final CharacterUtils.CharacterBuffer ioBuffer
-
Constructor Details
-
CharTokenizer
public CharTokenizer()
Creates a new CharTokenizer instance
-
CharTokenizer
public CharTokenizer(AttributeFactory factory)
Creates a new CharTokenizer instance
- Parameters:
factory - the attribute factory to use for this Tokenizer
-
CharTokenizer
public CharTokenizer(AttributeFactory factory, int maxTokenLen)
Creates a new CharTokenizer instance
- Parameters:
factory - the attribute factory to use for this Tokenizer
maxTokenLen - maximum token length the tokenizer will emit. Must be greater than 0 and less than MAX_TOKEN_LENGTH_LIMIT (1024*1024)
- Throws:
IllegalArgumentException - if maxTokenLen is invalid.
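As a hedged illustration of this constructor (the 255-character limit and the whitespace predicate are arbitrary example choices, not defaults of this class), an anonymous subclass might look like:
// Sketch: anonymous subclass using the (factory, maxTokenLen) constructor.
// Tokens are maximal runs of non-whitespace code points, capped at 255 chars.
Tokenizer tok = new CharTokenizer(AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY, 255) {
  @Override
  protected boolean isTokenChar(int c) {
    return !Character.isWhitespace(c);
  }
};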
-
-
Method Details
-
fromTokenCharPredicate
public static CharTokenizer fromTokenCharPredicate(IntPredicate tokenCharPredicate)
Creates a new instance of CharTokenizer using a custom predicate, supplied as method reference or lambda expression. The predicate should return true for all valid token characters.
This factory is intended to be used with lambdas or method references. E.g., an elegant way to create an instance which behaves exactly as LetterTokenizer is:
Tokenizer tok = CharTokenizer.fromTokenCharPredicate(Character::isLetter);
-
fromTokenCharPredicate
public static CharTokenizer fromTokenCharPredicate(AttributeFactory factory, IntPredicate tokenCharPredicate)
Creates a new instance of CharTokenizer with the supplied attribute factory using a custom predicate, supplied as method reference or lambda expression. The predicate should return true for all valid token characters.
This factory is intended to be used with lambdas or method references. E.g., an elegant way to create an instance which behaves exactly as LetterTokenizer is:
Tokenizer tok = CharTokenizer.fromTokenCharPredicate(factory, Character::isLetter);
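Because the argument is a plain java.util.function.IntPredicate, predicates can also be composed; the following is an assumed example (not taken from the API docs) that keeps letters and digits:
// Sketch: compose character classes with IntPredicate.or.
IntPredicate letterOrDigit = ((IntPredicate) Character::isLetter).or(Character::isDigit);
Tokenizer tok = CharTokenizer.fromTokenCharPredicate(letterOrDigit);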
-
fromSeparatorCharPredicate
public static CharTokenizer fromSeparatorCharPredicate(IntPredicate separatorCharPredicate)
Creates a new instance of CharTokenizer using a custom predicate, supplied as method reference or lambda expression. The predicate should return true for all valid token separator characters. This method is provided for convenience to easily use predicates that are negated (they match the separator characters, not the token characters).
This factory is intended to be used with lambdas or method references. E.g., an elegant way to create an instance which behaves exactly as WhitespaceTokenizer is:
Tokenizer tok = CharTokenizer.fromSeparatorCharPredicate(Character::isWhitespace);
-
fromSeparatorCharPredicate
public static CharTokenizer fromSeparatorCharPredicate(AttributeFactory factory, IntPredicate separatorCharPredicate)
Creates a new instance of CharTokenizer with the supplied attribute factory using a custom predicate, supplied as method reference or lambda expression. The predicate should return true for all valid token separator characters.
This factory is intended to be used with lambdas or method references. E.g., an elegant way to create an instance which behaves exactly as WhitespaceTokenizer is:
Tokenizer tok = CharTokenizer.fromSeparatorCharPredicate(factory, Character::isWhitespace);
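Any IntPredicate over code points works here; as an illustrative sketch (the separator set is an assumption chosen for the example), a tokenizer that splits on commas and semicolons could be created as:
// Sketch: commas and semicolons separate tokens; all other code points are kept.
Tokenizer tok = CharTokenizer.fromSeparatorCharPredicate(c -> c == ',' || c == ';');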
-
isTokenChar
protected abstract boolean isTokenChar(int c)
Returns true iff a codepoint should be included in a token. This tokenizer generates as tokens adjacent sequences of codepoints which satisfy this predicate. Codepoints for which this is false are used to define token boundaries and are not included in tokens.
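A minimal sketch of a concrete subclass (the DigitTokenizer class name is hypothetical, used only for illustration):
// Sketch: tokens are maximal runs of decimal digits; everything else is a boundary.
public final class DigitTokenizer extends CharTokenizer {
  @Override
  protected boolean isTokenChar(int c) {
    return Character.isDigit(c);
  }
}
-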
incrementToken
public final boolean incrementToken() throws IOException
Description copied from class: TokenStream
Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.
The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change it. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.
This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.
To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().
- Specified by:
incrementToken in class TokenStream
- Returns:
- false for end of stream; true otherwise
- Throws:
IOException
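A minimal consumption sketch following this contract (the input text and the use of a StringReader are illustrative assumptions); the attribute reference is retrieved once, before the loop:
// Sketch: standard consume loop over a CharTokenizer-based stream.
Tokenizer tok = CharTokenizer.fromTokenCharPredicate(Character::isLetter);
tok.setReader(new java.io.StringReader("two tokens"));
CharTermAttribute termAtt = tok.addAttribute(CharTermAttribute.class);
tok.reset();
while (tok.incrementToken()) {
  System.out.println(termAtt.toString());  // prints "two", then "tokens"
}
tok.end();
tok.close();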
-
end
public final void end() throws IOException
Description copied from class: TokenStream
This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). Streams implementing the old API should upgrade to use this feature.
This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset of a stream might differ from the offset of the last token, e.g., in case one or more whitespaces followed after the last token, but a WhitespaceTokenizer was used.
Additionally any skipped positions (such as those removed by a stopfilter) can be applied to the position increment, or any adjustment of other attributes where the end-of-stream value may be important.
If you override this method, always call super.end().
- Overrides:
end in class TokenStream
- Throws:
IOException - If an I/O error occurs
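As a sketch of the final-offset behavior described above (continuing the consume pattern from incrementToken(); variable names are illustrative), the OffsetAttribute can be read after end():
// Sketch: after end(), the offset attribute reflects the final offset of the stream,
// which may lie past the last token (e.g., trailing whitespace in the input).
OffsetAttribute offsetAtt = tok.addAttribute(OffsetAttribute.class);
tok.reset();
while (tok.incrementToken()) {
  // consume tokens
}
tok.end();
int finalOffset = offsetAtt.endOffset();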
-
reset
public void reset() throws IOException
Description copied from class: TokenStream
This method is called by a consumer before it begins consumption using TokenStream.incrementToken().
Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.
If you override this method, always call super.reset(), otherwise some internal state will not be correctly reset (e.g., Tokenizer will throw IllegalStateException on further usage).
- Overrides:
reset in class Tokenizer
- Throws:
IOException
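A hedged sketch of reuse (the reader contents are illustrative): after the previous pass has been ended and closed, the same instance can be pointed at new input and reset before consuming again:
// Sketch: reuse the same Tokenizer instance for a second input.
tok.end();
tok.close();                                    // close the previous input first
tok.setReader(new java.io.StringReader("second input"));
tok.reset();                                    // required before consuming again
while (tok.incrementToken()) {
  // consume tokens from the new input
}
tok.end();
tok.close();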
-