Useful for dumping out the input stream after doing some augmentation or other manipulations.

You can insert stuff, replace, and delete chunks. Note that the operations are done lazily, only when you convert the buffer to a String. This is very efficient because you are not moving data around all the time. As the buffer of tokens is converted to strings, the toString() methods check whether there is an operation at the current index. If so, the operation is executed and then normal String rendering continues on the buffer. This is like having multiple Turing machine instruction streams (programs) operating on a single input tape. :)

Since the operations are done lazily at toString-time, operations do not screw up the token index values. That is, an insert operation at token index i does not change the index values for tokens i+1..n-1.
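
As a concrete illustration, the following sketch queues two edits that both refer to original token indices. It is only a fragment: it assumes a TokenRewriteStream named tokens that has already been filled from a lexer, and the index values are illustrative, not taken from this page.

    // Sketch only: `tokens` is a TokenRewriteStream already filled from a
    // lexer; the indices below are illustrative.
    tokens.insertBefore(tokens.DEFAULT_PROGRAM_NAME, 2, "const ");
    // Index 5 still names the same original token even though an insert was
    // queued at index 2; the underlying token buffer never changes.
    tokens.replace(tokens.DEFAULT_PROGRAM_NAME, 5, 5, "y");
    console.log(tokens.toString());          // text with both edits applied
    console.log(tokens.toOriginalString());  // untouched original text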

Because operations never actually alter the buffer, you may always get the original token stream back without undoing anything. Since the instructions are queued up, you can easily simulate transactions and roll back any changes if there is an error just by removing instructions. For example,

    CharStream input = new ANTLRFileStream("input");
    TLexer lex = new TLexer(input);
    TokenRewriteStream tokens = new TokenRewriteStream(lex);
    T parser = new T(tokens);
    parser.startRule();

Then in the rules, you can execute:

    Token t, u;
    ...
    input.insertAfter(t, "text to put after t");
    input.insertAfter(u, "text after u");
    System.out.println(tokens.toString());

Actually, you have to cast the 'input' to a TokenRewriteStream. :(
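
For this TypeScript port, the same pipeline would look roughly like the sketch below. The import path and the generated TLexer/T names are assumptions made for illustration; only TokenRewriteStream and the members documented on this page are taken as given. Note that the insert-after operation is listed on this page under the name unsertAfter, so the sketch sticks to insertBefore.

    // Sketch only: the module path and the generated lexer/parser names are
    // hypothetical; the rewrite calls use the API documented on this page.
    import { ANTLRFileStream, TokenRewriteStream } from "antlr3-ts"; // assumed path
    import { TLexer } from "./TLexer"; // assumed generated lexer
    import { T } from "./T";           // assumed generated parser

    const input = new ANTLRFileStream("input");
    const lex = new TLexer(input);
    const tokens = new TokenRewriteStream(lex);
    const parser = new T(tokens);
    parser.startRule();

    // No cast is needed as long as the variable is typed as
    // TokenRewriteStream, so the rewrite methods stay visible.
    tokens.insertBefore(tokens.DEFAULT_PROGRAM_NAME, 0, "// rewritten\n");
    console.log(tokens.toString());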

You can also have multiple "instruction streams" and get multiple rewrites from a single pass over the input. Just name the instruction streams and use that name again when printing the buffer. This could be useful for generating a C file and also its header file--all from the same buffer:

 tokens.insertAfter("pass1", t, "text to put after t");}
    tokens.insertAfter("pass2", u, "text after u");}
    System.out.println(tokens.toString("pass1"));
    System.out.println(tokens.toString("pass2"));

If you don't use named rewrite streams, a "default" stream is used as the first example shows.
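
In this port the render methods are split by arity rather than overloaded, so the Java-style calls tokens.toString("pass1") above appear to map onto toString3(programName, start, end). A hedged sketch, assuming tokens is an already-filled TokenRewriteStream and that toString3 renders the named program over the given index range:

    // Sketch only: `tokens` is an already-filled TokenRewriteStream; the
    // token index used for the inserts is illustrative.
    tokens.insertBefore("pass1", 3, "text for the .c file ");
    tokens.insertBefore("pass2", 3, "text for the .h file ");
    const last = tokens.count - 1;
    console.log(tokens.toString3("pass1", tokens.MIN_TOKEN_INDEX, last));
    console.log(tokens.toString3("pass2", tokens.MIN_TOKEN_INDEX, last));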

Hierarchy

Implements

Index

Constructors

constructor

Properties

DEFAULT_PROGRAM_NAME

DEFAULT_PROGRAM_NAME: string = "default"

MIN_TOKEN_INDEX

MIN_TOKEN_INDEX: number = 0

PROGRAM_INIT_SIZE

PROGRAM_INIT_SIZE: number = 100

Protected _p

_p: number = -1

The index into the tokens list of the current token (the next token to consume). tokens[p] should be LT(1). p = -1 indicates that the stream still needs to initialize with the first token; the constructor does not fetch a token. The first call to LT(1) (or similar) fetches the first token and sets p = 0.

_tokens

_tokens: List<IToken> = new List<IToken>(100)

Record every single token pulled from the source so we can reproduce chunks of it later. The buffer in LookaheadStream overlaps sometimes as its moving window moves through the input. This list captures everything so we can access complete input text.

channel

channel: number

Skip tokens on any channel but this one; this is how we skip whitespace...

Protected lastRewriteTokenIndexes

lastRewriteTokenIndexes: Dictionary<string, number> = null

Map String (program name) -> Integer index

maxLookBehind

maxLookBehind: number = Number.MAX_VALUE

Protected programs

programs: Dictionary<string, List<RewriteOperation>> = null

You may have multiple, named streams of rewrite operations. I'm calling these things "programs." Maps String (name) -> rewrite (List)

range

range: number = 0

How deep have we gone?

Accessors

count

  • get count(): number

index

  • get index(): number

lastRealToken

lastToken

sourceName

  • get sourceName(): string

tokenSource

Methods

Protected catOpText

  • catOpText(a: string, b: string): string
  • Parameters

    • a: string
    • b: string

    Returns string

consume

  • consume(): void

delete

  • delete(programName: string, from: IToken, to: IToken): void
  • Parameters

    Returns void

deleteProgram

  • deleteProgram(programName?: string): void
  • Reset the program so that no instructions exist

    Parameters

    • Default value programName: string = this.DEFAULT_PROGRAM_NAME

    Returns void

Protected fetch

  • fetch(n: number): void

fill

  • fill(): void

get

Protected getKindOfOps

Protected getLastRewriteTokenIndex

  • getLastRewriteTokenIndex(programName: string): number
  • Parameters

    • programName: string

    Returns number

Protected getProgram

getTokens

  • Given a start and stop index, return a List of all tokens whose type is in the given BitSet. Return null if no tokens were found. This method looks at both on- and off-channel tokens.

    Parameters

    • start: number
    • stop: number
    • types: BitSet

    Returns List<IToken>

implements

  • implements(): any[]

Protected init

  • init(): void
  • Returns void

insertBefore

  • insertBefore(programName: string, index: number, text: string): void
  • Parameters

    • programName: string
    • index: number
    • text: string

    Returns void

la

  • la(i: number): number

lb

lt

mark

  • mark(): number

Protected reduceToSingleOperationPerIndex

  • We need to combine operations and report invalid operations (like overlapping replaces that are not completely nested). Inserts to the same index need to be combined, etc. Here are the cases:

    I.i.u I.j.v                          leave alone, nonoverlapping
    I.i.u I.i.v                          combine: I.i.vu

    R.i-j.u R.x-y.v | i-j in x-y         delete first R
    R.i-j.u R.i-j.v                      delete first R
    R.i-j.u R.x-y.v | x-y in i-j         ERROR
    R.i-j.u R.x-y.v | boundaries overlap ERROR

    Delete is the special case of replace (text == null):
    D.i-j.u D.x-y.v | boundaries overlap combine to max(min)..max(right)

    I.i.u R.x-y.v | i in (x+1)-y         delete I (since insert before, we're not deleting i)
    I.i.u R.x-y.v | i not in (x+1)-y     leave alone, nonoverlapping
    R.x-y.v I.i.u | i in x-y             ERROR
    R.x-y.v I.x.u                        R.x-y.uv (combine, delete I)
    R.x-y.v I.i.u | i not in x-y         leave alone, nonoverlapping

    I.i.u   = insert u before the token at index i
    R.x-y.u = replace tokens at indexes x-y with u

    First we need to examine replaces. For any replace op:

    1. Wipe out any insertions before the op within that range.
    2. Drop any earlier replace op that is contained completely within that range.
    3. Throw an exception upon boundary overlap with any previous replace.

    Then we can deal with inserts:

    1. For any inserts to the same index, combine them even if not adjacent.
    2. For any prior replace with the same left boundary, combine this insert with that replace and delete the replace.
    3. Throw an exception if the index is in the same range as a previous replace.

    Don't actually delete ops; just set them to null in the list, which is easier to walk. Later we can throw as we add to the index -> op map.

    Note that I.2 R.2-2 will wipe out I.2 even though, technically, the inserted stuff would be before the replace range. But, if you add tokens in front of a method body '{' and then delete the method body, I think the stuff before the '{' you added should disappear too.

    Return a map from token index to operation. (A caller-level sketch of how these rules play out follows this entry.)

    Parameters

    Returns Dictionary<number, RewriteOperation>
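
    To make the effect of these rules concrete from the caller's side, here is a hedged sketch. It assumes a filled TokenRewriteStream named tokens with illustrative index values, and that "default" matches DEFAULT_PROGRAM_NAME. It shows the documented case where an insert at index 2 is wiped out by a replace over 2-2, and the case where overlapping, non-nested replaces are expected to raise an error at render time.

        // Sketch only: `tokens` is an already-filled TokenRewriteStream.
        tokens.insertBefore("default", 2, "inserted ");
        tokens.replace("default", 2, 2, "replacement");
        // Per the note above (I.2 R.2-2), the insert at index 2 is dropped
        // when the operations are reduced, so only "replacement" appears.
        console.log(tokens.toString());

        tokens.replace("default", 4, 7, "a");
        tokens.replace("default", 6, 9, "b");
        // Boundaries overlap without nesting, so rendering is expected to
        // throw when the operations are reduced.
        try {
            console.log(tokens.toString());
        } catch (e) {
            console.error(e);
        }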

release

  • release(marker: number): void

replace

  • replace(programName: string, from: number, to: number, text: any): void
  • Parameters

    • programName: string
    • from: number
    • to: number
    • text: any

    Returns void

replace2

  • replace2(programName: string, from: IToken, to: IToken, text: any): void
  • Parameters

    Returns void

reset

  • reset(): void

rewind

  • rewind(marker?: number): void

rollback

  • rollback(programName: string, instructionIndex: number): void
  • Rollback the instruction stream for a program so that the indicated instruction (via instructionIndex) is no longer in the stream. UNTESTED!

    Parameters

    • programName: string
    • instructionIndex: number

    Returns void
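
    The overview suggests simulating transactions by removing queued instructions, and rollback is the hook for that (the description above flags it as untested). A hedged sketch, assuming the caller tracks how many instructions it has already queued for the program, since the internal program list is protected; the checkpoint value below is illustrative.

        // Sketch only: `tokens` is an already-filled TokenRewriteStream and
        // the caller tracks how many instructions it has queued so far.
        const checkpoint = 2; // illustrative: next instruction index for "default"
        try {
            tokens.insertBefore("default", 5, "risky edit ");
            // ... further edits; suppose validation decides they are wrong ...
            throw new Error("validation failed");
        } catch (e) {
            // Remove the instruction at `checkpoint` and everything queued
            // after it, leaving earlier instructions in place.
            tokens.rollback("default", checkpoint);
        }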

seek

  • seek(index: number): void

Protected setLastRewriteTokenIndex

  • setLastRewriteTokenIndex(programName: string, i: number): void
  • Parameters

    • programName: string
    • i: number

    Returns void

setup

  • setup(): void

skipOffTokenChannels

  • skipOffTokenChannels(i: number): number

Protected skipOffTokenChannelsReverse

  • skipOffTokenChannelsReverse(i: number): number

Protected sync

  • sync(i: number): void
  • Make sure index i in tokens has a token.

    Parameters

    • i: number

    Returns void

toDebugString

  • toDebugString(start?: number, end?: number): string
  • Parameters

    • Default value start: number = this.MIN_TOKEN_INDEX
    • Default value end: number = this.count - 1

    Returns string

toOriginalString

  • toOriginalString(): string
  • Returns string

toOriginalString2

  • toOriginalString2(start: number, end: number): string
  • Parameters

    • start: number
    • end: number

    Returns string

toString

  • toString(): string

toString2

  • toString2(start: number, stop: number): string

toString3

  • toString3(programName: string, start: number, end: number): string
  • Parameters

    • programName: string
    • start: number
    • end: number

    Returns string

unsertAfter

  • unsertAfter(programName: string, index: number, text: string): void
  • Parameters

    • programName: string
    • index: number
    • text: string

    Returns void
