github.com/caddyserver/caddy/v2/caddyconfig/caddyfile
No package summary is available.
Package
Files: 7. Third party imports: 1. Imports from organisation: 0. Tests: 0. Benchmarks: 0.
Constants
Vars
Interface guard
Types
Adapter
Adapter adapts Caddyfile to Caddy JSON.
| Field name | Field type | Comment |
|---|---|---|
| ServerType | | No comment on field. |
Dispenser
Dispenser is a type that dispenses tokens, similarly to a lexer, except that it can do so with some notion of structure. An empty Dispenser is invalid; call NewDispenser to make a proper instance.
| Field name | Field type | Comment |
|---|---|---|
| tokens | | No comment on field. |
| cursor | | No comment on field. |
| nesting | | No comment on field. |
| context | | A map of arbitrary context data that can be used to pass through some information to unmarshalers. |
Segment
Segment is a list of tokens which begins with a directive and ends at the end of the directive (either at the end of the line, or at the end of a block it opens).
| Field name | Field type | Comment |
|---|---|---|
| type | | No comment on field. |
ServerBlock
ServerBlock associates any number of keys from the head of the server block with tokens, which are grouped by segments.
| Field name | Field type | Comment |
|---|---|---|
| HasBraces | | No comment on field. |
| Keys | | No comment on field. |
| Segments | | No comment on field. |
| IsNamedRoute | | No comment on field. |
ServerType
ServerType is a type that can evaluate a Caddyfile and set up a caddy config.
| Field name | Field type | Comment |
|---|---|---|
| type | | No comment on field. |
Unmarshaler
Unmarshaler is a type that can unmarshal Caddyfile tokens to set itself up for a JSON encoding. The goal of an unmarshaler is not to set itself up for actual use, but to set itself up for being marshaled into JSON. Caddyfile-unmarshaled values will not be used directly; they will be encoded as JSON and then used from that. Implementations may be able to support multiple segments (instances of their directive or batch of tokens); typically this means wrapping parsing logic in a loop: for d.Next() { ... }. More commonly, only a single segment is supported, so a simple d.Next() at the start should be used to consume the module identifier token (directive name, etc).
| Field name | Field type | Comment |
|---|---|---|
| type | | No comment on field. |
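As a rough illustration of the pattern described above, here is a minimal sketch of an UnmarshalCaddyfile implementation for a hypothetical Gizmo module with one required argument and an optional subdirective block; the type, its fields, and the directive shape are invented for the example and are not part of this package.

```go
package gizmo

import (
	"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
)

// Gizmo is a hypothetical module type used only to illustrate the pattern.
type Gizmo struct {
	Name    string `json:"name,omitempty"`
	Timeout string `json:"timeout,omitempty"`
}

// UnmarshalCaddyfile sets Gizmo up for JSON encoding from Caddyfile tokens.
// Only a single segment is supported, so d.Next() consumes the directive
// name first.
func (g *Gizmo) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {
	d.Next() // consume directive name

	// one required argument on the same line
	if !d.NextArg() {
		return d.ArgErr()
	}
	g.Name = d.Val()

	// optional block of subdirectives
	for d.NextBlock(0) {
		switch d.Val() {
		case "timeout":
			if !d.NextArg() {
				return d.ArgErr()
			}
			g.Timeout = d.Val()
		default:
			return d.Errf("unrecognized subdirective '%s'", d.Val())
		}
	}
	return nil
}

// Interface guard.
var _ caddyfile.Unmarshaler = (*Gizmo)(nil)
```

The trailing interface guard is the same compile-time check this package lists under its own "Interface guard" section; it ensures the type keeps satisfying Unmarshaler as it evolves.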
lexer, Token
These types don't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| reader | | No comment on field. |
| token | | No comment on field. |
| line | | No comment on field. |
| skippedLines | | No comment on field. |
| File | | No comment on field. |
| imports | | No comment on field. |
| Line | | No comment on field. |
| Text | | No comment on field. |
| wasQuoted | | No comment on field. |
| heredocMarker | | No comment on field. |
| snippetName | | No comment on field. |
adjacency
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| type | | No comment on field. |
importGraph
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| nodes | | No comment on field. |
| edges | | No comment on field. |
parser
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| | | No comment on field. |
| block | | No comment on field. |
| eof | | No comment on field. |
| definedSnippets | | No comment on field. |
| nesting | | No comment on field. |
| importGraph | | No comment on field. |
Functions
func Format
Format formats the input Caddyfile to a standard, nice-looking appearance. It works by reading each rune of the input and taking control over all the bracing and whitespace that is written; otherwise, words, comments, placeholders, and escaped characters are all treated literally and written as they appear in the input.
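A minimal sketch of formatting raw Caddyfile bytes, assuming Format takes and returns a byte slice as in the current API:

```go
package main

import (
	"fmt"

	"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
)

func main() {
	// Messily indented input; Format takes control of bracing and
	// whitespace, leaving words, comments, and placeholders untouched.
	input := []byte("example.com {\n\t\t respond   \"hello\"\n}\n")

	fmt.Print(string(caddyfile.Format(input)))
}
```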
Uses: bytes.Buffer, bytes.NewReader, bytes.TrimSpace, io.EOF, slices.Equal, unicode.IsSpace.
func FormattingDifference
FormattingDifference returns a warning and true if the formatted version is any different from the input; empty warning and false otherwise. TODO: also perform this check on imported files
Uses: bytes.Equal, bytes.Replace, caddyconfig.Warning.
func NewDispenser
NewDispenser returns a Dispenser filled with the given tokens.
func NewTestDispenser
NewTestDispenser parses input into tokens and creates a new Dispenser for test purposes only; any errors are fatal.
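A sketch of how NewTestDispenser might be used inside a test to walk tokens; the input, directive name, and test name are invented for the example:

```go
package example

import (
	"testing"

	"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
)

func TestWalkTokens(t *testing.T) {
	// NewTestDispenser lexes the input and exits fatally on error,
	// so it is only appropriate inside tests.
	d := caddyfile.NewTestDispenser(`gizmo foo {
		timeout 5s
	}`)

	// Next loads every token in order, including braces.
	for d.Next() {
		t.Logf("line %d: %q", d.Line(), d.Val())
	}
}
```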
Uses: io.EOF, log.Fatalf.
func Parse
Parse parses the input just enough to group tokens, in order, by server block. No further parsing is performed. Server blocks are returned in the order in which they appear. Directives that do not appear in validDirectives will cause an error. If you do not want to check for valid directives, pass in nil instead.
Environment variables in {$ENVIRONMENT_VARIABLE} notation will be replaced before parsing begins.
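A hedged sketch of grouping tokens by server block, assuming the two-argument form Parse(filename, input) of the current API; the input is invented and shows the {$ENVIRONMENT_VARIABLE} substitution mentioned above:

```go
package main

import (
	"fmt"
	"log"

	"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
)

func main() {
	// {$GREETING} is substituted from the environment before parsing begins.
	input := []byte("example.com {\n\trespond \"{$GREETING}\"\n}\n")

	blocks, err := caddyfile.Parse("Caddyfile", input)
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range blocks {
		fmt.Println(sb.GetKeysText(), "->", len(sb.Segments), "segment(s)")
	}
}
```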
func Tokenize
Tokenize takes bytes as input and lexes it into a list of tokens that can be parsed as a Caddyfile. Also takes a filename to fill the token's File as the source of the tokens, which is important to determine relative paths for import directives.
func UnmarshalModule
UnmarshalModule instantiates a module with the given ID and invokes UnmarshalCaddyfile on the new value using the immediate next segment of d as input. In other words, d's next token should be the first token of the module's Caddyfile input.
This function is used when the next segment of Caddyfile tokens belongs to another Caddy module. The returned value is often type-asserted to the module's associated type for practical use when setting up a config.
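A sketch of the pattern described above; the helper name and the module ID shown are illustrative only:

```go
package example

import (
	"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
)

// unmarshalGuestModule is a hypothetical helper. It expects d's next token
// to be the first token of the guest module's Caddyfile input.
func unmarshalGuestModule(d *caddyfile.Dispenser) (caddyfile.Unmarshaler, error) {
	unm, err := caddyfile.UnmarshalModule(d, "http.handlers.templates")
	if err != nil {
		return nil, err
	}
	// Callers usually type-assert unm to the module's concrete or
	// interface type before encoding it into the JSON config.
	return unm, nil
}
```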
func (*Dispenser) AllArgs
AllArgs is like Args, but if there are more argument tokens available than there are targets, false is returned. The number of available argument tokens must match the number of targets exactly to return true.
func (*Dispenser) ArgErr
ArgErr returns an argument error, meaning that another argument was expected but not found. In other words, a line break or open curly brace was encountered instead of an argument.
func (*Dispenser) Args
Args is a convenience function that loads the next arguments (tokens on the same line) into an arbitrary number of strings pointed to in targets. If there are not enough argument tokens available to fill targets, false is returned and the remaining targets are left unchanged. If all the targets are filled, then true is returned.
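A sketch contrasting Args and AllArgs for a hypothetical two-argument directive; the helper and directive are invented for the example:

```go
package example

import (
	"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
)

// parsePair is a hypothetical helper for a directive of the form:
//
//	pair <key> <value>
func parsePair(d *caddyfile.Dispenser) (key, value string, err error) {
	d.Next() // consume the directive name

	// AllArgs requires exactly two remaining argument tokens on the line;
	// Args would also succeed if extra arguments were left over.
	if !d.AllArgs(&key, &value) {
		return "", "", d.ArgErr()
	}
	return key, value, nil
}
```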
func (*Dispenser) CountRemainingArgs
CountRemainingArgs counts the amount of remaining arguments (tokens on the same line) without consuming the tokens.
func (*Dispenser) Delete
Delete deletes the current token and returns the updated slice of tokens. The cursor is not advanced to the next token. Because deletion modifies the underlying slice, this method should only be called if you have access to the original slice of tokens and/or are using the slice of tokens outside this Dispenser instance. If you do not re-assign the slice with the return value of this method, inconsistencies in the token array will become apparent (or worse, hide from you like they did me for 3 and a half freaking hours late one night).
func (*Dispenser) DeleteN
DeleteN is the same as Delete, but can delete many tokens at once. If there aren't N tokens available to delete, none are deleted.
func (*Dispenser) EOFErr
EOFErr returns an error indicating that the dispenser reached the end of the input when searching for the next token.
func (*Dispenser) Err
Err generates a custom parse-time error with a message of msg.
Uses: errors.New.
func (*Dispenser) Errf
Errf is like Err, but for formatted error messages
Uses: fmt.Errorf.
func (*Dispenser) File
File gets the filename where the current token originated.
func (*Dispenser) GetContext
GetContext gets the value of a key in the context map.
func (*Dispenser) GetContextString
GetContextString gets the value of a key in the context map as a string, or an empty string if the key does not exist.
func (*Dispenser) Line
Line gets the line number of the current token. If there is no token loaded, it returns 0.
func (*Dispenser) Nesting
Nesting returns the current nesting level. Necessary if using NextBlock().
func (*Dispenser) NewFromNextSegment
NewFromNextSegment returns a new dispenser with a copy of the tokens from the current token until the end of the "directive" whether that be to the end of the line or the end of a block that starts at the end of the line; in other words, until the end of the segment.
func (*Dispenser) Next
Next loads the next token. Returns true if a token was loaded; false otherwise. If false, all tokens have been consumed.
func (*Dispenser) NextArg
NextArg loads the next token if it is on the same line and if it is not a block opening (open curly brace). Returns true if an argument token was loaded; false otherwise. If false, all tokens on the line have been consumed except for potentially a block opening. It handles imported tokens correctly.
func (*Dispenser) NextBlock
NextBlock can be used as the condition of a for loop to load the next token as long as it opens a block or is already in a block nested more than initialNestingLevel. In other words, a loop over NextBlock() will iterate all tokens in the block assuming the next token is an open curly brace, until the matching closing brace. The open and closing brace tokens for the outer-most block will be consumed internally and omitted from the iteration.
Proper use of this method looks like this:
for nesting := d.Nesting(); d.NextBlock(nesting); {
}
However, in simple cases where it is known that the Dispenser is new and has not already traversed state by a loop over NextBlock(), this will do:
for d.NextBlock(0) {
}
As with other token parsing logic, a loop over NextBlock() should be contained within a loop over Next(), as it is usually prudent to skip the initial token.
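Building on the loop forms above, a sketch of a hypothetical directive parser that iterates a nested block; all names and the input shape are invented:

```go
package example

import (
	"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
)

// parseOptions is a hypothetical parser for a segment such as:
//
//	options {
//		color blue
//		size 10
//	}
//
// Each subdirective is assumed to carry exactly one value.
func parseOptions(d *caddyfile.Dispenser) (map[string]string, error) {
	opts := make(map[string]string)
	for d.Next() { // consume "options" (and any further segments)
		for nesting := d.Nesting(); d.NextBlock(nesting); {
			key := d.Val()
			if !d.NextArg() {
				return nil, d.ArgErr()
			}
			opts[key] = d.Val()
		}
	}
	return opts, nil
}
```

Capturing d.Nesting() before the inner loop is what lets the outer Next() loop and the block loop coexist without the block loop escaping past its closing brace.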
func (*Dispenser) NextLine
NextLine loads the next token only if it is not on the same line as the current token, and returns true if a token was loaded; false otherwise. If false, there is not another token or it is on the same line. It handles imported tokens correctly.
func (*Dispenser) NextSegment
NextSegment returns a copy of the tokens from the current token until the end of the line or block that starts at the end of the line.
func (*Dispenser) Prev
Prev moves to the previous token. It does the inverse of Next(), except this function may decrement the cursor to -1 so that the next call to Next() points to the first token; this allows dispensing to "start over". This method returns true if the cursor ends up pointing to a valid token.
func (*Dispenser) RemainingArgs
RemainingArgs loads any more arguments (tokens on the same line) into a slice and returns them. Open curly brace tokens also indicate the end of arguments, and the curly brace is not included in the return value nor is it loaded.
func (*Dispenser) RemainingArgsRaw
RemainingArgsRaw loads any more arguments (tokens on the same line, retaining quotes) into a slice and returns them. Open curly brace tokens also indicate the end of arguments, and the curly brace is not included in the return value nor is it loaded.
func (*Dispenser) Reset
Reset sets d's cursor to the beginning, as if this was a new and unused dispenser.
func (*Dispenser) ScalarVal
ScalarVal gets the value of the current token, converted to the closest scalar type. If there is no token loaded, it returns nil.
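Given the strconv conversions listed under Uses below, a sketch of consuming the result with a type switch; the helper is hypothetical and the exact set of returned types is an assumption:

```go
package example

import (
	"fmt"

	"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
)

// printScalar is a hypothetical helper that inspects whichever scalar
// type ScalarVal chose for the current token.
func printScalar(d *caddyfile.Dispenser) {
	switch v := d.ScalarVal().(type) {
	case bool:
		fmt.Println("bool:", v)
	case int:
		fmt.Println("int:", v)
	case float64:
		fmt.Println("float:", v)
	case string:
		fmt.Println("string:", v)
	default:
		fmt.Printf("other (%T): %v\n", v, v)
	}
}
```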
Uses: strconv.Atoi, strconv.ParseBool, strconv.ParseFloat.
func (*Dispenser) SetContext
SetContext sets a key-value pair in the context map.
func (*Dispenser) SyntaxErr
SyntaxErr creates a generic syntax error which explains what was found and what was expected.
Uses: errors.New, fmt.Sprintf, strings.Join.
func (*Dispenser) Token
Token returns the current token.
func (*Dispenser) Val
Val gets the text of the current token. If there is no token loaded, it returns empty string.
func (*Dispenser) ValRaw
ValRaw gets the raw text of the current token (including quotes). If the token was a heredoc, then the delimiter is not included, because that is not relevant to any unmarshaling logic at this time. If there is no token loaded, it returns empty string.
func (*Dispenser) WrapErr
WrapErr takes an existing error and adds the Caddyfile file and line number.
Uses: fmt.Errorf, strings.Join.
func (Adapter) Adapt
Adapt converts the Caddyfile config in body to Caddy JSON.
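A hedged sketch of driving the adapter directly, assuming the caddyconfig Adapt(body, options) signature and that httpcaddyfile.ServerType satisfies this package's ServerType interface; the input and option values are illustrative:

```go
package main

import (
	"fmt"
	"log"

	"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
	"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile"
)

func main() {
	adapter := caddyfile.Adapter{ServerType: httpcaddyfile.ServerType{}}

	body := []byte("example.com {\n\trespond \"hi\"\n}\n")
	jsonCfg, warnings, err := adapter.Adapt(body, map[string]any{"filename": "Caddyfile"})
	if err != nil {
		log.Fatal(err)
	}
	for _, w := range warnings {
		fmt.Println("warning:", w)
	}
	fmt.Println(string(jsonCfg))
}
```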
Uses: fmt.Errorf, json.Marshal.
func (Segment) Directive
Directive returns the directive name for the segment. The directive name is the text of the first token.
func (ServerBlock) DispenseDirective
DispenseDirective returns a dispenser that contains all the tokens in the server block.
func (ServerBlock) GetKeysText
func (Token) Clone
Clone returns a deep copy of the token.
func (Token) NumLineBreaks
NumLineBreaks counts how many line breaks are in the token text.
Uses: strings.Count.
func (Token) Quoted
Quoted returns true if the token was enclosed in quotes (i.e. double quotes, backticks, or heredoc).
Private functions
func allTokens
allTokens lexes the entire input, but does not parse it. It returns all the tokens from the input, unstructured and in order. It may mutate input as it expands env vars.
func isNextOnNewLine
isNextOnNewLine tests whether t2 is on a different line from t1
func makeArgsReplacer
makeArgsReplacer prepares a Replacer which can replace non-variadic args placeholders in imported tokens.
References: strconv.Atoi, strconv.Itoa, strings.Contains.
func parseVariadic
parseVariadic determines if the token is a variadic placeholder, and if so, determines the index range (start/end) of args to use. Returns a boolean signaling whether a variadic placeholder was found, and the start and end indices.
References: strconv.Atoi, strconv.Itoa, strings.Contains, strings.Cut, strings.HasPrefix, strings.HasSuffix, strings.TrimPrefix, strings.TrimSuffix, zap.String, zap.Strings.
func replaceEnvVars
replaceEnvVars replaces all occurrences of environment variables. It mutates the underlying array and returns the updated slice.
References: bytes.Index, os.LookupEnv, strings.SplitN.
func isNewLine
isNewLine determines whether the current token is on a different line (higher line number) than the previous token. It handles imported tokens correctly. If there isn't a previous token, it returns true.
func isNextOnNewLine
isNextOnNewLine determines whether the current token is on a different line (higher line number) than the next token. It handles imported tokens correctly. If there isn't a next token, it returns true.
func nextOnSameLine
nextOnSameLine advances the cursor if the next token is on the same line of the same file.
func addEdge
func addEdges
func addNode
func addNodes
func areConnected
func exists
func removeNode
func removeNodes
func willCycle
func finalizeHeredoc
finalizeHeredoc takes the runes read as the heredoc text and the marker, and processes the text to strip leading whitespace, returning the final value without the leading whitespace.
References: fmt.Errorf, strings.Index, strings.LastIndex, strings.ReplaceAll, strings.Split.
func load
load prepares the lexer to scan an input for tokens. It discards any leading byte order mark.
References: bufio.NewReader.
func next
next loads the next token into the lexer. A token is delimited by whitespace, unless the token starts with a quotes character (") in which case the token goes until the closing quotes (the enclosing quotes are not included). Inside quoted strings, quotes may be escaped with a preceding \ character. No other chars may be escaped. The rest of the line is skipped if a "#" character is read in. Returns true if a token was loaded; false otherwise.
References: fmt.Errorf, io.EOF, unicode.IsSpace.
func addresses
func begin
func blockContents
func blockTokens
read and store everything in a block for later replay.
func closeCurlyBrace
closeCurlyBrace expects the current token to be a closing curly brace. This acts like an assertion because it returns an error if the token is not a closing curly brace. It does NOT advance the token.
func directive
directive collects tokens until the directive's scope closes (either end of line or end of curly brace block). It expects the currently-loaded token to be a directive (or } that ends a server block). The collected tokens are loaded into the current server block for later use by directive setup functions.
func directives
directives parses through all the lines for directives and it expects the next token to be the first directive. It goes until EOF or closing curly brace which ends the server block.
func doImport
doImport swaps out the import directive and its argument (a total of 2 tokens) with the tokens in the specified file or globbing pattern. When the function returns, the cursor is on the token before where the import directive was. In other words, call Next() to access the first token that was imported.
References: filepath.Dir, filepath.Glob, filepath.IsAbs, filepath.Join, filepath.Separator, fmt.Sprintf, strings.Contains, strings.ContainsAny, strings.Count, strings.HasPrefix, strings.HasSuffix, strings.Split, strings.TrimPrefix, strings.TrimSuffix, zap.String.
func doSingleImport
doSingleImport lexes the individual file at importFile and returns its tokens or an error, if any.
References: io.ReadAll, os.Open, strings.TrimSpace, zap.String.
func isNamedRoute
func isSnippet
func openCurlyBrace
openCurlyBrace expects the current token to be an opening curly brace. This acts like an assertion because it returns an error if the token is not an opening curly brace. It does NOT advance the token.
func parseAll
func parseOne
Tests
Files: 4. Third party imports: 0. Imports from organisation: 0. Tests: 21. Benchmarks: 0.