Uses of Interface
org.apache.lucene.analysis.tokenattributes.TypeAttribute

Packages that use TypeAttribute
org.apache.lucene.analysis.cjk
    Analyzer for Chinese, Japanese, and Korean, which indexes bigrams.
org.apache.lucene.analysis.classic
    Fast, general-purpose grammar-based tokenizers.
org.apache.lucene.analysis.cn.smart
    Analyzer for Simplified Chinese, which indexes words.
org.apache.lucene.analysis.commongrams
    Construct n-grams for frequently occurring terms and phrases.
org.apache.lucene.analysis.core
    Basic, general-purpose analysis components.
org.apache.lucene.analysis.email
    Fast, general-purpose tokenizers for URLs and email addresses.
org.apache.lucene.analysis.icu.segmentation
    Tokenizer that breaks text into words with the Unicode Text Segmentation algorithm.
org.apache.lucene.analysis.minhash
    MinHash filtering (for LSH).
org.apache.lucene.analysis.miscellaneous
    Miscellaneous TokenStreams.
org.apache.lucene.analysis.pattern
    Set of components for pattern-based (regex) analysis.
org.apache.lucene.analysis.payloads
    Provides various convenience classes for creating payloads on Tokens.
org.apache.lucene.analysis.shingle
    Word n-gram filters.
org.apache.lucene.analysis.standard
    Fast, general-purpose grammar-based tokenizer; StandardTokenizer implements the Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29.
org.apache.lucene.analysis.synonym
    Analysis components for synonyms.
org.apache.lucene.analysis.tokenattributes
    General-purpose attributes for text analysis.
org.apache.lucene.analysis.wikipedia
    Tokenizer that is aware of Wikipedia syntax.