github.com/TykTechnologies/tyk-pump/pumps
No package summary is available.
Package
Files: 32. Third party imports: 46. Imports from organisation: 4. Tests: 0. Benchmarks: 0.
Constants
Vars
Types
ApiKeyTransport
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| APIKey | | No comment on field. |
| APIKeyID | | No comment on field. |
BaseMongoConf
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | Prefix for the environment variables that will be used to override the configuration. Defaults to |
| MongoURL | | The full URL to your MongoDB instance. This can be a clustered instance if necessary, and it should include the database and username/password data. |
| MongoUseSSL | | Set to true to enable a Mongo SSL connection. |
| MongoSSLInsecureSkipVerify | | Allows the use of self-signed certificates when connecting to an encrypted MongoDB database. |
| MongoSSLAllowInvalidHostnames | | Ignore the hostname check when it differs from the original (for example with SSH tunneling). The rest of the TLS verification will still be performed. |
| MongoSSLCAFile | | Path to the PEM file with trusted root certificates. |
| MongoSSLPEMKeyfile | | Path to the PEM file which contains both the client certificate and the private key. This is required for Mutual TLS. |
| MongoDBType | | Specifies the MongoDB type. If it's 0, you are using standard MongoDB; if it's 1, AWS DocumentDB; if it's 2, CosmosDB. Defaults to standard MongoDB (0). |
| OmitIndexCreation | | Set to true to disable the default Tyk index creation. |
| MongoSessionConsistency | | Set the consistency mode for the session. It defaults to |
| MongoDriverType | | MongoDriverType is the type of the driver (library) to use. The valid values are "mongo-go" and "mgo". Since v1.9, the default driver is "mongo-go". Check out this guide to learn about MongoDB drivers supported by Tyk Pump. |
| MongoDirectConnection | | MongoDirectConnection informs whether to establish connections only with the specified seed servers, or to obtain information for the whole cluster and establish connections with further servers too. If true, the client will only connect to the host provided in the ConnectionString and won't attempt to discover other hosts in the cluster. Useful when network restrictions prevent discovery, such as with SSH tunneling. Default is false. |
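The table above loses the Go field types; as a rough illustration, a hypothetical mirror of the struct using the documented field names might look like this (the types and example values are assumptions, not the package's definitions or defaults):

```go
package main

import "fmt"

// Hypothetical mirror of BaseMongoConf using the documented field names.
// The real struct lives in github.com/TykTechnologies/tyk-pump/pumps and
// carries mapstructure/env tags that this sketch omits.
type BaseMongoConf struct {
	EnvPrefix                     string
	MongoURL                      string
	MongoUseSSL                   bool
	MongoSSLInsecureSkipVerify    bool
	MongoSSLAllowInvalidHostnames bool
	MongoSSLCAFile                string
	MongoSSLPEMKeyfile            string
	MongoDBType                   int // 0 = standard MongoDB, 1 = AWS DocumentDB, 2 = CosmosDB
	OmitIndexCreation             bool
	MongoDriverType               string // "mongo-go" (default since v1.9) or "mgo"
	MongoDirectConnection         bool
}

func main() {
	conf := BaseMongoConf{
		MongoURL:        "mongodb://user:pass@localhost:27017/tyk_analytics", // illustrative
		MongoUseSSL:     true,
		MongoSSLCAFile:  "/etc/ssl/mongo-ca.pem",
		MongoDBType:     0,
		MongoDriverType: "mongo-go",
	}
	fmt.Printf("%+v\n", conf)
}
```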
CSVConf
@PumpConf CSV
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| CSVDir | | The directory and the filename where the CSV data will be stored. |
CSVPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| csvConf | | No comment on field. |
| wroteHeaders | | No comment on field. |
| | | No comment on field. |
CommonPumpConfig
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| filters | | No comment on field. |
| timeout | | No comment on field. |
| maxRecordSize | | No comment on field. |
| OmitDetailedRecording | | No comment on field. |
| log | | No comment on field. |
| ignoreFields | | No comment on field. |
| decodeResponseBase64 | | No comment on field. |
| decodeRequestBase64 | | No comment on field. |
CustomMetrics
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| type | | No comment on field. |
DogStatsdConf
@PumpConf DogStatsd
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| Namespace | | Prefix for your metrics to datadog. |
| Address | | Address of the datadog agent, including host and port. |
| SampleRate | | Defaults to |
| AsyncUDS | | Enable async UDS over UDP: https://github.com/Datadog/datadog-go#unix-domain-sockets-client. |
| AsyncUDSWriteTimeout | | Integer write timeout in seconds if |
| Buffered | | Enable buffering of messages. |
| BufferedMaxMessages | | Max messages in a single datagram if |
| Tags | | List of tags to be added to the metric. The possible options are listed in the below example. If no tag is specified, the fallback behavior is to use the below tags. Note that this configuration can generate significant charges due to the unbound nature of some tags. On startup, you should see the loaded configs when initializing the dogstatsd pump. |
DogStatsdPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| conf | | No comment on field. |
| client | | No comment on field. |
| | | No comment on field. |
DummyPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| | | No comment on field. |
Elasticsearch3Operator
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| esClient | | No comment on field. |
| bulkProcessor | | No comment on field. |
| log | | No comment on field. |
Elasticsearch5Operator
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| esClient | | No comment on field. |
| bulkProcessor | | No comment on field. |
| log | | No comment on field. |
Elasticsearch6Operator
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| esClient | | No comment on field. |
| bulkProcessor | | No comment on field. |
| log | | No comment on field. |
Elasticsearch7Operator
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| esClient | | No comment on field. |
| bulkProcessor | | No comment on field. |
| log | | No comment on field. |
ElasticsearchBulkConfig
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| Workers | | Number of workers. Defaults to 1. |
| FlushInterval | | Specifies the time in seconds to flush the data and send it to ES. Disabled by default. |
| BulkActions | | Specifies the number of requests needed to flush the data and send it to ES. Defaults to 1000 requests. Can be disabled with -1 if needed. |
| BulkSize | | Specifies the size (in bytes) needed to flush the data and send it to ES. Defaults to 5MB. Can be disabled with -1 if needed. |
ElasticsearchConf
@PumpConf Elasticsearch
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| IndexName | | The name of the index that all the analytics data will be placed in. Defaults to "tyk_analytics". |
| ElasticsearchURL | | If sniffing is disabled, the URL that all data will be sent to. Defaults to "http://localhost:9200". |
| EnableSniffing | | If sniffing is enabled, the "elasticsearch_url" will be used to make a request to get a list of all the nodes in the cluster, and the returned addresses will then be used. Defaults to |
| DocumentType | | The type of the document that is created in ES. Defaults to "tyk_analytics". |
| RollingIndex | | Appends the date to the end of the index name, so each day's data is split into a different index name. E.g. tyk_analytics-2016.02.28. Defaults to |
| ExtendedStatistics | | If set to |
| GenerateID | | When enabled, generate _id for outgoing records. This prevents duplicate records when retrying ES. |
| DecodeBase64 | | Allows the base64 bits to be decoded before being passed to ES. |
| Version | | Specifies the ES version. Use "3" for ES 3.X, "5" for ES 5.X, "6" for ES 6.X, "7" for ES 7.X. Defaults to "3". |
| DisableBulk | | Disable batch writing. Defaults to false. |
| BulkConfig | | Batch writing trigger configuration. The options are OR'd with each other: |
| AuthAPIKeyID | | API key ID used for API key auth in ES. It is sent to ES in the Authorization header as ApiKey base64(auth_api_key_id:auth_api_key). |
| AuthAPIKey | | API key used for API key auth in ES. It is sent to ES in the Authorization header as ApiKey base64(auth_api_key_id:auth_api_key). |
| Username | | Basic auth username. It is sent to ES in the Authorization header as username:password encoded in base64. |
| Password | | Basic auth password. It is sent to ES in the Authorization header as username:password encoded in base64. |
| UseSSL | | Enables SSL connection. |
| SSLInsecureSkipVerify | | Controls whether the pump client verifies the Elasticsearch server's certificate chain and hostname. |
| SSLCertFile | | Can be used to set a custom certificate file for authentication with Elasticsearch. |
| SSLKeyFile | | Can be used to set a custom key file for authentication with Elasticsearch. |
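The AuthAPIKeyID/AuthAPIKey comments above pin down the header format; a small sketch of building that Authorization header (the helper name is illustrative, not the package's code):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"net/http"
)

// buildAPIKeyHeader reproduces the documented header format:
// "ApiKey base64(auth_api_key_id:auth_api_key)".
func buildAPIKeyHeader(id, key string) string {
	creds := base64.StdEncoding.EncodeToString([]byte(id + ":" + key))
	return "ApiKey " + creds
}

func main() {
	req, _ := http.NewRequest(http.MethodGet, "http://localhost:9200", nil)
	req.Header.Set("Authorization", buildAPIKeyHeader("my-key-id", "my-key")) // illustrative values
	fmt.Println(req.Header.Get("Authorization"))
}
```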
ElasticsearchOperator
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| type | | No comment on field. |
ElasticsearchPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| operator | | No comment on field. |
| esConf | | No comment on field. |
| | | No comment on field. |
GraphMongoPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| | | No comment on field. |
| | | No comment on field. |
GraphSQLAggregatePump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| SQLConf | | No comment on field. |
| db | | No comment on field. |
| | | No comment on field. |
GraphSQLConf
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| TableName | | TableName is a configuration field unique to the sql-graph pump. It specifies the name of the SQL table to be created/used by the pump in the non-sharding case; when sharding is enabled, it specifies the table prefix. |
| | | No comment on field. |
GraphSQLPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| db | | No comment on field. |
| Conf | | No comment on field. |
| tableName | | No comment on field. |
| | | No comment on field. |
GraylogConf
@PumpConf Graylog
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| GraylogHost | | Graylog host. |
| GraylogPort | | Graylog port. |
| Tags | | List of tags to be added to the metric. The possible options are listed in the below example. If no tag is specified, the fallback behaviour is to send nothing. The possible values are: |
GraylogPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| client | | No comment on field. |
| conf | | No comment on field. |
| | | No comment on field. |
HybridPump
HybridPump sends analytics to MDCB over RPC.
| Field name | Field type | Comment |
|---|---|---|
| | | No comment on field. |
| clientSingleton | | No comment on field. |
| dispatcher | | No comment on field. |
| clientIsConnected | | No comment on field. |
| funcClientSingleton | | No comment on field. |
| hybridConfig | | No comment on field. |
HybridPumpConf
@PumpConf Hybrid
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| ConnectionString | | MDCB URL connection string. |
| RPCKey | | Your organization ID to connect to the MDCB installation. |
| APIKey | | This is the API key of a user, used to authenticate and authorize the Hybrid Pump's access through MDCB. The user should be a standard Dashboard user with minimal privileges, so as to reduce any risk if the user is compromised. |
| IgnoreTagPrefixList | | Specifies prefixes of tags that should be ignored if |
| CallTimeout | | Hybrid pump RPC call timeout in seconds. Defaults to |
| RPCPoolSize | | Hybrid pump connection pool size. Defaults to |
| aggregationTime | | aggregationTime specifies the frequency of the aggregation in minutes if |
| Aggregated | | Send aggregated analytics data to Tyk MDCB. |
| TrackAllPaths | | Specifies if it should store aggregated data for all the endpoints if |
| StoreAnalyticsPerMinute | | Determines if the aggregations should be made per minute (true) or per hour (false) if |
| UseSSL | | Use SSL to connect to Tyk MDCB. |
| SSLInsecureSkipVerify | | Skip SSL verification. |
Influx2Conf
@PumpConf Influx2
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| BucketName | | InfluxDB2 pump bucket name. |
| OrgName | | InfluxDB2 pump organization name. |
| Addr | | InfluxDB2 pump host. |
| Token | | InfluxDB2 pump database token. |
| Fields | | Define which Analytics fields should be sent to InfluxDB2. Check the available fields in the example below. Default value is |
| Tags | | List of tags to be added to the metric. |
| Flush | | Flush data to InfluxDB2 as soon as the pump receives it. |
| CreateMissingBucket | | Create the bucket if it doesn't exist. |
| NewBucketConfig | | New bucket configuration. |
Influx2Pump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| dbConf | | No comment on field. |
| client | | No comment on field. |
| | | No comment on field. |
InfluxConf
@PumpConf Influx
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| DatabaseName | | InfluxDB pump database name. |
| Addr | | InfluxDB pump host. |
| Username | | InfluxDB pump database username. |
| Password | | InfluxDB pump database password. |
| Fields | | Define which Analytics fields should be sent to InfluxDB. Check the available fields in the example below. Default value is |
| Tags | | List of tags to be added to the metric. |
InfluxPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| dbConf | | No comment on field. |
| | | No comment on field. |
Json
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| type | | No comment on field. |
KafkaConf
@PumpConf Kafka
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| Broker | | The list of brokers used to discover the partitions available on the Kafka cluster. E.g. "localhost:9092". |
| ClientId | | Unique identifier for client connections established with Kafka. |
| Topic | | The topic that the writer will produce messages to. |
| Timeout | | Timeout is the maximum number of seconds to wait for a connect or write to complete. |
| Compressed | | Enable the "github.com/golang/snappy" codec to be used to compress Kafka messages. By default it is |
| MetaData | | Can be used to set custom metadata inside the Kafka message. |
| UseSSL | | Enables SSL connection. |
| SSLInsecureSkipVerify | | Controls whether the pump client verifies the Kafka server's certificate chain and host name. |
| SSLCertFile | | Can be used to set a custom certificate file for authentication with Kafka. |
| SSLKeyFile | | Can be used to set a custom key file for authentication with Kafka. |
| SASLMechanism | | SASL mechanism configuration. Only "plain" and "scram" are supported. |
| Username | | SASL username. |
| Password | | SASL password. |
| Algorithm | | SASL algorithm. It's the algorithm specified for the scram mechanism. It can be sha-512 or sha-256. Defaults to "sha-256". |
KafkaPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| kafkaConf | | No comment on field. |
| writerConfig | | No comment on field. |
| log | | No comment on field. |
| | | No comment on field. |
KinesisConf
@PumpConf Kinesis
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| StreamName | | A name to identify the stream. The stream name is scoped to the AWS account used by the application that creates the stream. It is also scoped by AWS Region. That is, two streams in two different AWS accounts can have the same name, and two streams in the same AWS account but in two different Regions can also have the same name. |
| Region | | AWS Region the Kinesis stream targets. |
| BatchSize | | Each PutRecords request (the API call used by this pump) can support up to 500 records. Each record in the request can be as large as 1 MiB, up to a limit of 5 MiB for the entire request, including partition keys. Each shard can support writes of up to 1,000 records per second, up to a maximum total data write rate of 1 MiB per second. |
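Given the PutRecords limits described above, a sketch of record batching under the 500-record cap might look like the following (illustrative only; the pump's real batching must also respect the per-record and per-request byte limits):

```go
package main

import "fmt"

// chunkRecords splits records into batches no larger than batchSize,
// mirroring the 500-record-per-PutRecords limit described above.
func chunkRecords(records []string, batchSize int) [][]string {
	var batches [][]string
	for len(records) > 0 {
		n := batchSize
		if len(records) < n {
			n = len(records)
		}
		batches = append(batches, records[:n])
		records = records[n:]
	}
	return batches
}

func main() {
	recs := make([]string, 1200)
	fmt.Println(len(chunkRecords(recs, 500))) // 3 batches: 500 + 500 + 200
}
```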
KinesisPump
KinesisPump is a Tyk Pump that sends analytics records to AWS Kinesis.
| Field name | Field type | Comment |
|---|---|---|
| client | | No comment on field. |
| kinesisConf | | No comment on field. |
| log | | No comment on field. |
| | | No comment on field. |
LogzioPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| sender | | No comment on field. |
| config | | No comment on field. |
| | | No comment on field. |
LogzioPumpConfig
@PumpConf Logzio
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| CheckDiskSpace | | Set the sender to check if it crosses the maximum allowed disk usage. Default value is |
| DiskThreshold | | Set the disk queue threshold; once the threshold is crossed, the sender will not enqueue the received logs. Default value is |
| DrainDuration | | Set the drain duration (flush logs on disk). Default value is |
| QueueDir | | The directory for the queue. |
| Token | | Token for sending data to your logzio account. |
| URL | | Use if you do not want to use the default Logzio URL, e.g. when using a proxy. Default is |
MoesifConf
@PumpConf Moesif
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| ApplicationID | | Moesif Application Id. You can find your Moesif Application Id in the Moesif Dashboard under Top Right Menu -> API Keys. Moesif recommends creating separate Application Ids for each environment, such as Production, Staging, and Development, to keep data isolated. |
| RequestHeaderMasks | | An option to mask a specific request header field. |
| ResponseHeaderMasks | | An option to mask a specific response header field. |
| RequestBodyMasks | | An option to mask a specific request body field. |
| ResponseBodyMasks | | An option to mask a specific response body field. |
| DisableCaptureRequestBody | | An option to disable logging of the request body. Default value is |
| DisableCaptureResponseBody | | An option to disable logging of the response body. Default value is |
| UserIDHeader | | An optional field name to identify the User from a request or response header. |
| CompanyIDHeader | | An optional field name to identify the Company (Account) from a request or response header. |
| EnableBulk | | Set this to |
| BulkConfig | | Batch writing trigger configuration. |
| AuthorizationHeaderName | | An optional request header field name used to identify the User in Moesif. Default value is |
| AuthorizationUserIdField | | An optional field name used to parse the User from the authorization header in Moesif. Default value is |
MoesifPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| moesifAPI | | No comment on field. |
| moesifConf | | No comment on field. |
| filters | | No comment on field. |
| timeout | | No comment on field. |
| samplingPercentage | | No comment on field. |
| eTag | | No comment on field. |
| lastUpdatedTime | | No comment on field. |
| appConfig | | No comment on field. |
| userSampleRateMap | | No comment on field. |
| companySampleRateMap | | No comment on field. |
| | | No comment on field. |
MongoAggregateConf
@PumpConf MongoAggregate
| Field name | Field type | Comment |
|---|---|---|
| | | TYKCONFIGEXPAND |
| UseMixedCollection | | If set to |
| TrackAllPaths | | Specifies if it should store aggregated data for all the endpoints. By default, |
| IgnoreTagPrefixList | | Specifies prefixes of tags that should be ignored. |
| ThresholdLenTagList | | Determines the threshold for the number of tags in an aggregation. If the number of tags exceeds the threshold, an alert will be printed. Defaults to 1000. |
| StoreAnalyticsPerMinute | | Determines if the aggregations should be made per minute (true) or per hour (false). |
| AggregationTime | | Determines the time window, in minutes, over which aggregations are made. It defaults to the maximum value; the maximum is 60 and the minimum is 1. If StoreAnalyticsPerMinute is set to true, this field will be skipped. |
| EnableAggregateSelfHealing | | Determines whether self-healing is activated. Self-healing allows the pump to handle Mongo document max-size errors by creating a new document when the max size is reached. It also halves the AggregationTime field to avoid the same error in the future. |
| IgnoreAggregationsList | | This list determines which aggregations are going to be dropped and not stored in the collection. Possible values are: "APIID", "errors", "versions", "apikeys", "oauthids", "geo", "tags", "endpoints", "keyendpoints", "oauthendpoints", and "apiendpoints". |
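A sketch of the self-healing arithmetic described for EnableAggregateSelfHealing, assuming the documented lower bound of 1 minute on the aggregation window (hypothetical helper, not the pump's code):

```go
package main

import "fmt"

// halveAggregationTime models the behaviour described above: on a Mongo
// document max-size error, the aggregation window is halved, never
// dropping below the documented minimum of 1 minute.
func halveAggregationTime(current int) int {
	next := current / 2
	if next < 1 {
		next = 1
	}
	return next
}

func main() {
	fmt.Println(halveAggregationTime(60)) // 30
	fmt.Println(halveAggregationTime(1))  // 1
}
```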
MongoAggregatePump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| store | | No comment on field. |
| dbConf | | No comment on field. |
| | | No comment on field. |
MongoConf
@PumpConf Mongo
| Field name | Field type | Comment |
|---|---|---|
| | | TYKCONFIGEXPAND |
| CollectionName | | Specifies the mongo collection name. |
| MaxInsertBatchSizeBytes | | Maximum insert batch size for the mongo selective pump. If the batch we are writing surpasses this value, it will be sent in multiple batches. Defaults to 10MB. |
| MaxDocumentSizeBytes | | Maximum document size. If a document exceeds this value, it will be skipped. Defaults to 10MB. |
| CollectionCapMaxSizeBytes | | Size in bytes of the capped collection on 64-bit architectures. Defaults to 5GB. |
| CollectionCapEnable | | Enable collection capping. It's used to set a maximum size for the collection. |
MongoPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| IsUptime | | No comment on field. |
| store | | No comment on field. |
| dbConf | | No comment on field. |
| | | No comment on field. |
MongoSelectiveConf
@PumpConf MongoSelective
| Field name | Field type | Comment |
|---|---|---|
| | | TYKCONFIGEXPAND |
| MaxInsertBatchSizeBytes | | Maximum insert batch size for the mongo selective pump. If the batch we are writing surpasses this value, it will be sent in multiple batches. Defaults to 10MB. |
| MaxDocumentSizeBytes | | Maximum document size. If a document exceeds this value, it will be skipped. Defaults to 10MB. |
MongoSelectivePump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| store | | No comment on field. |
| dbConf | | No comment on field. |
| | | No comment on field. |
MongoType
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| type | | No comment on field. |
MysqlConfig
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| DefaultStringSize | | Default size for string fields. Defaults to |
| DisableDatetimePrecision | | Disable datetime precision, which is not supported before MySQL 5.6. |
| DontSupportRenameIndex | | Drop and create the index when renaming it; renaming indexes is not supported before MySQL 5.7 and in MariaDB. |
| DontSupportRenameColumn | | |
| SkipInitializeWithVersion | | Auto-configure based on the current MySQL version. |
NewBucket
Configuration required to create the bucket if it doesn't already exist. See https://docs.influxdata.com/influxdb/v2.1/api/#operation/PostBuckets.
| Field name | Field type | Comment |
|---|---|---|
| Description | | A description visible in the InfluxDB2 UI. |
| RetentionRules | | Rules to expire or retain data. No rules means data never expires. |
PostgresConfig
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| PreferSimpleProtocol | | Disables implicit prepared statement usage. |
PrometheusConf
@PumpConf Prometheus
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | Prefix for the environment variables that will be used to override the configuration. Defaults to |
| Addr | | The full URL to your Prometheus instance, {HOST}:{PORT}. For example |
| Path | | The path to the Prometheus collection. For example |
| AggregateObservations | | This will enable an experimental feature that aggregates the histogram metrics request time values before exposing them to Prometheus. Enabling this will reduce the CPU usage of your Prometheus pump, but you will lose histogram precision. Experimental. |
| DisabledMetrics | | Metrics to exclude from exposition. Currently, excludes only the base metrics. |
| TrackAllPaths | | Specifies if it should expose aggregated metrics for all the endpoints. By default, |
| CustomMetrics | | Custom Prometheus metrics. |
PrometheusMetric
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| Name | | The name of the custom metric. For example: |
| Help | | Description text of the custom metric. For example: |
| MetricType | | Determines the type of the metric. There are currently two available options: |
| Buckets | | Defines the buckets into which observations are counted. The type is a float64 array and, by default, [1, 2, 5, 7, 10, 15, 20, 25, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500, 1000, 2000, 5000, 10000, 30000, 60000]. |
| ObfuscateAPIKeys | | Controls whether the pump client should hide the API key. In case you still need a substring of the value, check the next option. Default value is |
| ObfuscateAPIKeysLength | | Defines the number of characters kept from the end of the API key. The |
| Labels | | Defines the partitions in the metrics. For example: ['response_code','api_name']. The available labels are: |
| enabled | | No comment on field. |
| counterVec | | No comment on field. |
| histogramVec | | No comment on field. |
| counterMap | | No comment on field. |
| histogramMap | | No comment on field. |
| aggregatedObservations | | No comment on field. |
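Putting the documented fields together, a hypothetical custom counter metric definition could look like this (the mirror type and the example values are assumptions; the "counter"/"histogram" options for MetricType come from the InitVec notes further below):

```go
package main

import "fmt"

// Hypothetical mirror of the documented PrometheusMetric public fields,
// enough to show how a custom metric is described; the real type lives
// in the pumps package and carries additional unexported state.
type PrometheusMetric struct {
	Name       string
	Help       string
	MetricType string // "counter" or "histogram"
	Buckets    []float64
	Labels     []string
}

func main() {
	m := PrometheusMetric{
		Name:       "tyk_http_requests_total", // illustrative name
		Help:       "Total HTTP requests seen by the gateway",
		MetricType: "counter",
		Labels:     []string{"response_code", "api_name"},
	}
	fmt.Printf("%+v\n", m)
}
```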
PrometheusPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| conf | | No comment on field. |
| TotalStatusMetrics | | Per service. |
| PathStatusMetrics | | No comment on field. |
| KeyStatusMetrics | | No comment on field. |
| OauthStatusMetrics | | No comment on field. |
| TotalLatencyMetrics | | No comment on field. |
| allMetrics | | No comment on field. |
| | | No comment on field. |
Pump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| type | | No comment on field. |
ResurfacePump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| logger | | No comment on field. |
| config | | No comment on field. |
| data | | No comment on field. |
| wg | | No comment on field. |
| enabled | | No comment on field. |
| | | No comment on field. |
ResurfacePumpConfig
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | No comment on field. |
| URL | | No comment on field. |
| Rules | | No comment on field. |
| Queue | | No comment on field. |
RetentionRule
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| EverySeconds | | Duration in seconds for how long data will be kept in the database. 0 means infinite. |
| ShardGroupDurationSeconds | | Shard duration measured in seconds. |
| Type | | Retention rule type. For example, "expire". |
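For example, a 30-day "expire" rule built from the documented fields might look like this (the mirror type is an assumption; the real one lives in the pumps package):

```go
package main

import "fmt"

// Hypothetical mirror of the documented RetentionRule fields.
type RetentionRule struct {
	EverySeconds              int64 // 0 means data never expires
	ShardGroupDurationSeconds int64
	Type                      string // e.g. "expire"
}

func main() {
	rule := RetentionRule{
		EverySeconds: 30 * 24 * 60 * 60, // keep data for 30 days
		Type:         "expire",
	}
	fmt.Printf("%+v\n", rule)
}
```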
SQLAggregatePump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| | | No comment on field. |
| SQLConf | | No comment on field. |
| db | | No comment on field. |
| dbType | | No comment on field. |
| dialect | | No comment on field. |
| backgroundIndexCreated | | This channel is used to signal that the background index creation has finished; it is used for testing. |
SQLAggregatePumpConf
@PumpConf SQLAggregate
| Field name | Field type | Comment |
|---|---|---|
| | | TYKCONFIGEXPAND |
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| TrackAllPaths | | Specifies if it should store aggregated data for all the endpoints. By default, |
| IgnoreTagPrefixList | | Specifies prefixes of tags that should be ignored. |
| ThresholdLenTagList | | No comment on field. |
| StoreAnalyticsPerMinute | | Determines if the aggregations should be made per minute instead of per hour. |
| IgnoreAggregationsList | | No comment on field. |
| OmitIndexCreation | | Set to true to disable the default Tyk index creation. |
SQLConf
@PumpConf SQL
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| Type | | The only supported and tested types are |
| ConnectionString | | Specifies the connection string to the database. |
| Postgres | | Postgres configurations. |
| Mysql | | Mysql configurations. |
| TableSharding | | Specifies if all the analytics records are going to be stored in one table or in multiple tables (one per day). By default, |
| LogLevel | | Specifies the SQL log verbosity. The possible values are: |
| BatchSize | | Specifies the number of records to be written per batch. Type int. By default, it writes at most 1000 records per batch. |
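A sketch of the one-table-per-day naming that table_sharding implies (the exact suffix format the pump uses is an assumption here):

```go
package main

import (
	"fmt"
	"time"
)

// shardedTableName illustrates a per-day table layout: a base table
// name plus the record's date.
func shardedTableName(base string, t time.Time) string {
	return fmt.Sprintf("%s_%s", base, t.Format("20060102"))
}

func main() {
	day := time.Date(2016, 2, 28, 0, 0, 0, 0, time.UTC)
	fmt.Println(shardedTableName("tyk_analytics", day)) // tyk_analytics_20160228
}
```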
SQLPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| | | No comment on field. |
| IsUptime | | No comment on field. |
| SQLConf | | No comment on field. |
| db | | No comment on field. |
| dbType | | No comment on field. |
| dialect | | No comment on field. |
| backgroundIndexCreated | | This channel is used to signal that the background index creation has finished; it is used for testing. |
SQSConf
SQSConf represents the configuration structure for the Tyk Pump SQS (Simple Queue Service) pump.
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | EnvPrefix specifies the prefix for the environment variables that will be used to override the configuration. Defaults to |
| QueueName | | QueueName specifies the name of the AWS Simple Queue Service (SQS) queue for message delivery. |
| AWSRegion | | AWSRegion sets the AWS region where the SQS queue is located. |
| AWSSecret | | AWSSecret is the AWS secret key used for authentication. |
| AWSKey | | AWSKey is the AWS access key ID used for authentication. |
| AWSToken | | AWSToken is the AWS session token used for authentication. This is only required when using temporary credentials. |
| AWSEndpoint | | AWSEndpoint is the custom endpoint URL for AWS SQS, if applicable. |
| AWSMessageGroupID | | AWSMessageGroupID specifies the message group ID for ordered processing within the SQS queue. |
| AWSMessageIDDeduplicationEnabled | | AWSMessageIDDeduplicationEnabled enables/disables message deduplication based on unique IDs. |
| AWSDelaySeconds | | AWSDelaySeconds configures the delay (in seconds) before messages become available for processing. |
| AWSSQSBatchLimit | | AWSSQSBatchLimit sets the maximum number of messages in a single batch when sending to the SQS queue. |
SQSPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| SQSClient | | No comment on field. |
| SQSQueueURL | | No comment on field. |
| SQSConf | | No comment on field. |
| log | | No comment on field. |
| | | No comment on field. |
SQSSendMessageBatchAPI
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| type | | No comment on field. |
SegmentConf
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | No comment on field. |
| WriteKey | | No comment on field. |
SegmentPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| segmentClient | | No comment on field. |
| segmentConf | | No comment on field. |
| | | No comment on field. |
SplunkClient
SplunkClient contains Splunk client methods.
| Field name | Field type | Comment |
|---|---|---|
| Token | | No comment on field. |
| CollectorURL | | No comment on field. |
| TLSSkipVerify | | No comment on field. |
| httpClient | | No comment on field. |
| retry | | No comment on field. |
SplunkPump
SplunkPump is a Tyk Pump driver for Splunk.
| Field name | Field type | Comment |
|---|---|---|
| client | | No comment on field. |
| config | | No comment on field. |
| | | No comment on field. |
SplunkPumpConfig
SplunkPumpConfig contains the driver configuration parameters. @PumpConf Splunk
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| CollectorToken | | Token for the Splunk HTTP Event Collector. |
| CollectorURL | | Endpoint the pump will send analytics to. Should look something like: |
| SSLInsecureSkipVerify | | Controls whether the pump client verifies the Splunk server's certificate chain and host name. |
| SSLCertFile | | SSL cert file location. |
| SSLKeyFile | | SSL cert key location. |
| SSLServerName | | SSL server name used in the TLS connection. |
| ObfuscateAPIKeys | | Controls whether the pump client should hide the API key. In case you still need a substring of the value, check the next option. Default value is |
| ObfuscateAPIKeysLength | | Defines the number of characters kept from the end of the API key. The |
| Fields | | Define which Analytics fields should participate in the Splunk event. Check the available fields in the example below. Default value is |
| IgnoreTagPrefixList | | Choose which tags should be ignored by the Splunk Pump. Keep in mind that the tag name and value are hyphenated. Default value is |
| EnableBatch | | If this is set to |
| BatchMaxContentLength | | Max content length in bytes to be sent in batch requests. It should match the |
| MaxRetries | | MaxRetries represents the maximum number of retries to attempt if sending requests to the Splunk HEC fails. Default value is |
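A sketch of the obfuscation described by ObfuscateAPIKeys and ObfuscateAPIKeysLength, keeping only the last n characters visible (the exact masking format the pump produces is an assumption):

```go
package main

import (
	"fmt"
	"strings"
)

// obfuscateKey hides an API key, keeping only its last n characters, as
// the two config options above describe. Illustrative helper only.
func obfuscateKey(key string, n int) string {
	if n <= 0 || n >= len(key) {
		return "****"
	}
	return strings.Repeat("*", 4) + key[len(key)-n:]
}

func main() {
	fmt.Println(obfuscateKey("5e9d9544a1dcd60001d0ed20", 4)) // ****ed20
}
```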
StatsdConf
@PumpConf Statsd
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| Address | | Address of statsd, including host and port. |
| Fields | | Define which Analytics fields should have their own metric calculation. |
| Tags | | List of tags to be added to the metric. |
| SeparatedMethod | | Allows a separate method field instead of having it embedded in the path field. |
StatsdPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| dbConf | | No comment on field. |
| | | No comment on field. |
StdOutConf
@PumpConf StdOut
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| Format | | Format of the analytics logs. Default is |
| LogFieldName | | Root name of the JSON object the analytics record is nested in. |
StdOutPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| | | No comment on field. |
| conf | | No comment on field. |
SyslogConf
@PumpConf Syslog
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| Transport | | Possible values are |
| NetworkAddr | | Host and port combination of your syslog daemon, e.g. |
| LogLevel | | The severity level, an integer from 0-7, based on the standard syslog severity levels. |
| Tag | | Prefix tag. When working with FluentD, you should provide a FluentD parser based on the OS you are using so that FluentD can correctly read the logs. |
SyslogPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| syslogConf | | No comment on field. |
| writer | | No comment on field. |
| filters | | No comment on field. |
| timeout | | No comment on field. |
| | | No comment on field. |
TimestreamPump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| client | | No comment on field. |
| config | | No comment on field. |
| | | No comment on field. |
TimestreamPumpConf
@PumpConf Timestream
| Field name | Field type | Comment |
|---|---|---|
| EnvPrefix | | The prefix for the environment variables that will be used to override the configuration. Defaults to |
| AWSRegion | | The AWS region that contains the Timestream database. |
| TableName | | The table name where the data is going to be written. |
| DatabaseName | | The Timestream database name that contains the table being written to. |
| Dimensions | | A filter of all the dimensions that will be written to the table. The possible options are ["Method","Host","Path","RawPath","APIKey","APIVersion","APIName","APIID","OrgID","OauthID"]. |
| Measures | | A filter of all the measures that will be written to the table. The possible options are ["ContentLength","ResponseCode","RequestTime","NetworkStats.OpenConnections", "NetworkStats.ClosedConnection","NetworkStats.BytesIn","NetworkStats.BytesOut", "Latency.Total","Latency.Upstream","GeoData.City.GeoNameID","IPAddress", "GeoData.Location.Latitude","GeoData.Location.Longitude","UserAgent","RawRequest","RawResponse", "RateLimit.Limit","Ratelimit.Remaining","Ratelimit.Reset", "GeoData.Country.ISOCode","GeoData.City.Names","GeoData.Location.TimeZone"]. |
| WriteRateLimit | | Set to true in order to save any of the |
| ReadGeoFromRequest | | If set to true, we will try to read geo information from the headers if the values aren't found on the analytics record. Default value is |
| WriteZeroValues | | Set to true in order to save numerical values with value zero. Default value is |
| NameMappings | | A name mapping for both Dimensions and Measures names. It is not required. |
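A sketch of how a Dimensions allow-list could filter record attributes before writing (hypothetical helper; the pump maps concrete analytics fields rather than a generic map):

```go
package main

import "fmt"

// filterDimensions keeps only the attributes named in the allow-list,
// mirroring the filtering role the Dimensions option plays above.
func filterDimensions(record map[string]string, allowed []string) map[string]string {
	out := make(map[string]string, len(allowed))
	for _, name := range allowed {
		if v, ok := record[name]; ok {
			out[name] = v
		}
	}
	return out
}

func main() {
	rec := map[string]string{"Method": "GET", "Path": "/users", "Internal": "x"}
	fmt.Println(filterDimensions(rec, []string{"Method", "Path"})) // map[Method:GET Path:/users]
}
```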
TimestreamWriteRecordsAPI
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| type | | No comment on field. |
UptimePump
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| type | | No comment on field. |
counterStruct
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| labelValues | | No comment on field. |
| count | | No comment on field. |
dbObject
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| tableName | | No comment on field. |
histogramCounter
histogramCounter is a helper struct to maintain the totalRequestTime and hits in memory.
| Field name | Field type | Comment |
|---|---|---|
| totalRequestTime | | No comment on field. |
| hits | | No comment on field. |
| labelValues | | No comment on field. |
rawDecoded
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| headers | | No comment on field. |
| body | | No comment on field. |
Functions
func Dialect
func GetPumpByName
func LoadHeadersFromRawRequest
func LoadHeadersFromRawResponse
func Min
func NewLogzioClient
func NewLogzioPumpConfig
func NewSplunkClient
NewSplunkClient initializes a new SplunkClient.
Uses: errors.New, http.DefaultClient, http.ProxyFromEnvironment, http.Transport, tls.Certificate, tls.Config, tls.LoadX509KeyPair, url.Parse.
func (*ApiKeyTransport) RoundTrip
RoundTrip for ApiKeyTransport auth.
Uses: base64.StdEncoding, http.DefaultTransport.
func (*BaseMongoConf) GetBlurredURL
func (*CSVPump) GetEnvPrefix
func (*CSVPump) GetName
func (*CSVPump) Init
func (*CSVPump) New
func (*CSVPump) WriteData
func (*CommonPumpConfig) GetDecodedRequest
func (*CommonPumpConfig) GetDecodedResponse
func (*CommonPumpConfig) GetEnvPrefix
func (*CommonPumpConfig) GetFilters
func (*CommonPumpConfig) GetIgnoreFields
func (*CommonPumpConfig) GetMaxRecordSize
func (*CommonPumpConfig) GetOmitDetailedRecording
func (*CommonPumpConfig) GetTimeout
func (*CommonPumpConfig) SetDecodingRequest
func (*CommonPumpConfig) SetDecodingResponse
func (*CommonPumpConfig) SetFilters
func (*CommonPumpConfig) SetIgnoreFields
func (*CommonPumpConfig) SetLogLevel
func (*CommonPumpConfig) SetMaxRecordSize
func (*CommonPumpConfig) SetOmitDetailedRecording
func (*CommonPumpConfig) SetTimeout
func (*CommonPumpConfig) Shutdown
func (*CustomMetrics) Set
func (*DogStatsdPump) GetEnvPrefix
func (*DogStatsdPump) GetName
func (*DogStatsdPump) Init
func (*DogStatsdPump) New
func (*DogStatsdPump) Shutdown
func (*DogStatsdPump) WriteData
func (*DummyPump) GetName
func (*DummyPump) Init
func (*DummyPump) New
func (*DummyPump) WriteData
func (*ElasticsearchPump) GetEnvPrefix
func (*ElasticsearchPump) GetName
func (*ElasticsearchPump) GetTLSConfig
GetTLSConfig sets the TLS config for the pump
Uses: errors.New, tls.Certificate, tls.Config, tls.LoadX509KeyPair.
func (*ElasticsearchPump) Init
func (*ElasticsearchPump) New
func (*ElasticsearchPump) Shutdown
func (*ElasticsearchPump) WriteData
func (*GraphMongoPump) GetEnvPrefix
func (*GraphMongoPump) GetName
func (*GraphMongoPump) Init
func (*GraphMongoPump) New
func (*GraphMongoPump) SetDecodingRequest
func (*GraphMongoPump) SetDecodingResponse
func (*GraphMongoPump) WriteData
func (*GraphSQLAggregatePump) DoAggregatedWriting
func (*GraphSQLAggregatePump) GetEnvPrefix
func (*GraphSQLAggregatePump) GetName
func (*GraphSQLAggregatePump) Init
func (*GraphSQLAggregatePump) New
func (*GraphSQLAggregatePump) WriteData
func (*GraphSQLPump) GetEnvPrefix
func (*GraphSQLPump) GetName
func (*GraphSQLPump) Init
func (*GraphSQLPump) New
func (*GraphSQLPump) SetLogLevel
func (*GraphSQLPump) WriteData
func (*GraylogPump) GetEnvPrefix
func (*GraylogPump) GetName
func (*GraylogPump) Init
func (*GraylogPump) New
func (*GraylogPump) WriteData
func (*HybridPump) GetName
func (*HybridPump) Init
func (*HybridPump) New
func (*HybridPump) RPCLogin
func (*HybridPump) Shutdown
func (*HybridPump) WriteData
func (*HybridPumpConf) CheckDefaults
func (*Influx2Pump) GetEnvPrefix
func (*Influx2Pump) GetName
func (*Influx2Pump) Init
func (*Influx2Pump) New
func (*Influx2Pump) Shutdown
func (*Influx2Pump) WriteData
func (*InfluxPump) GetEnvPrefix
func (*InfluxPump) GetName
func (*InfluxPump) Init
func (*InfluxPump) New
func (*InfluxPump) WriteData
func (*KafkaPump) GetEnvPrefix
func (*KafkaPump) GetName
func (*KafkaPump) Init
func (*KafkaPump) New
func (*KafkaPump) WriteData
func (*KinesisPump) GetEnvPrefix
func (*KinesisPump) GetName
GetName returns the name of the pump.
func (*KinesisPump) Init
Init initializes the pump with configuration settings.
Uses: awsconfig.LoadDefaultConfig, awsconfig.WithRegion, context.TODO, kinesis.NewFromConfig, mapstructure.Decode.
func (*KinesisPump) New
func (*KinesisPump) WriteData
WriteData writes the analytics records to AWS Kinesis in batches.
Uses: analytics.AnalyticsRecord, aws.String, aws.ToString, big.NewInt, fmt.Sprint, json.Marshal, kinesis.PutRecordsInput, rand.Int, rand.Reader, types.PutRecordsRequestEntry.
func (*LogzioPump) GetEnvPrefix
func (*LogzioPump) GetName
func (*LogzioPump) Init
func (*LogzioPump) New
func (*LogzioPump) WriteData
func (*MoesifPump) GetEnvPrefix
func (*MoesifPump) GetName
func (*MoesifPump) GetTimeout
func (*MoesifPump) Init
func (*MoesifPump) New
func (*MoesifPump) SetTimeout
func (*MoesifPump) Shutdown
func (*MoesifPump) WriteData
func (*MongoAggregatePump) DoAggregatedWriting
func (*MongoAggregatePump) GetCollectionName
func (*MongoAggregatePump) GetEnvPrefix
func (*MongoAggregatePump) GetName
func (*MongoAggregatePump) Init
func (*MongoAggregatePump) New
func (*MongoAggregatePump) SetAggregationTime
SetAggregationTime sets the aggregation time for the pump
func (*MongoAggregatePump) SetDecodingRequest
func (*MongoAggregatePump) SetDecodingResponse
func (*MongoAggregatePump) ShouldSelfHeal
ShouldSelfHeal returns true if the pump should self heal
Uses: analytics.SetlastTimestampAgggregateRecord, strings.Contains, time.Time.
func (*MongoAggregatePump) WriteData
func (*MongoAggregatePump) WriteUptimeData
WriteUptimeData will pull the data from the in-memory store and drop it into the specified MongoDB collection
func (*MongoPump) AccumulateSet
AccumulateSet groups data items into chunks based on the max batch size limit while handling graph analytics records separately. It returns a 2D array of DBObjects.
Uses: model.DBObject.
func (*MongoPump) GetEnvPrefix
func (*MongoPump) GetName
func (*MongoPump) Init
func (*MongoPump) New
func (*MongoPump) SetDecodingRequest
func (*MongoPump) SetDecodingResponse
func (*MongoPump) WriteData
func (*MongoPump) WriteUptimeData
WriteUptimeData will pull the data from the in-memory store and drop it into the specified MongoDB collection
Uses: analytics.UptimeReportData, context.Background, model.DBObject.
func (*MongoSelectivePump) AccumulateSet
AccumulateSet organizes analytics data into a set of chunks based on their size.
Uses: model.DBObject.
func (*MongoSelectivePump) GetCollectionName
func (*MongoSelectivePump) GetEnvPrefix
func (*MongoSelectivePump) GetName
func (*MongoSelectivePump) Init
func (*MongoSelectivePump) New
func (*MongoSelectivePump) SetDecodingRequest
func (*MongoSelectivePump) SetDecodingResponse
func (*MongoSelectivePump) WriteData
func (*MongoSelectivePump) WriteUptimeData
WriteUptimeData will pull the data from the in-memory store and drop it into the specified MongoDB collection
Uses: analytics.UptimeReportData, analytics.UptimeSQLTable, context.Background, model.DBObject.
func (*PrometheusMetric) Expose
Expose executes prometheus library functions using the counter/histogram vector from the PrometheusMetric struct. If the PrometheusMetric is counterType, it executes the prometheus client Add function to add the counters from counterMap to the label values of the metric. If the PrometheusMetric is histogramType and the aggregate_observations config is true, it calculates the average value of the metrics in the histogramMap and executes prometheus Observe. If aggregate_observations is false, it doesn't do anything, since the metric has already been exposed.
Uses: errors.New.
func (*PrometheusMetric) GetLabelsValues
GetLabelsValues returns a list of string values based on the custom metric labels.
Uses: fmt.Sprint.
func (*PrometheusMetric) Inc
Inc fills counterMap and histogramMap with the data from the record.
Uses: errors.New, strings.Join.
func (*PrometheusMetric) InitVec
InitVec inits the prometheus metric based on the metric_type. It can only create counter and histogram; if the metric_type is anything else, it returns an error.
Uses: errors.New, prometheus.CounterOpts, prometheus.HistogramOpts, prometheus.MustRegister, prometheus.NewCounterVec, prometheus.NewHistogramVec.
func (*PrometheusMetric) Observe
Observe fills histogramMap with the sum of request times and hits per label value if aggregate_observations is true. If aggregate_observations is set to false (the default), it executes prometheus Observe directly.
Uses: errors.New, strings.Join.
func (*PrometheusPump) CreateBasicMetrics
CreateBasicMetrics stores all the predefined pump metrics in allMetrics slice
func (*PrometheusPump) GetEnvPrefix
func (*PrometheusPump) GetName
func (*PrometheusPump) Init
func (*PrometheusPump) InitCustomMetrics
InitCustomMetrics initialises custom prometheus metrics based on p.conf.CustomMetrics and adds them to p.allMetrics.
func (*PrometheusPump) New
func (*PrometheusPump) WriteData
func (*ResurfacePump) Flush
func (*ResurfacePump) GetEnvPrefix
func (*ResurfacePump) GetName
func (*ResurfacePump) Init
func (*ResurfacePump) New
func (*ResurfacePump) Shutdown
func (*ResurfacePump) WriteData
func (*SQLAggregatePump) DoAggregatedWriting
func (*SQLAggregatePump) GetEnvPrefix
func (*SQLAggregatePump) GetName
func (*SQLAggregatePump) Init
func (*SQLAggregatePump) New
func (*SQLAggregatePump) SetDecodingRequest
func (*SQLAggregatePump) SetDecodingResponse
func (*SQLAggregatePump) WriteData
WriteData aggregates and writes the passed data to the SQL database. When table sharding is enabled, startIndex and endIndex are found by checking the timestamps of the records. The main for loop iterates and finds the index where a new day starts. Then the data is passed to the AggregateData function and written to the database day by day, into different tables. However, if table sharding is not enabled, the for loop iterates once, all data is passed at once to the AggregateData function, and it is written to a single table.
Uses: analytics.AggregateData, analytics.AggregateSQLTable, analytics.AnalyticsRecord.
func (*SQLPump) GetEnvPrefix
func (*SQLPump) GetName
func (*SQLPump) Init
func (*SQLPump) New
func (*SQLPump) SetDecodingRequest
func (*SQLPump) SetDecodingResponse
func (*SQLPump) WriteData
func (*SQLPump) WriteUptimeData
func (*SQSPump) GetEnvPrefix
func (*SQSPump) GetName
func (*SQSPump) Init
func (*SQSPump) New
func (*SQSPump) NewSQSPublisher
func (*SQSPump) WriteData
func (*SegmentPump) GetEnvPrefix
func (*SegmentPump) GetName
func (*SegmentPump) Init
func (*SegmentPump) New
func (*SegmentPump) ToJSONMap
func (*SegmentPump) WriteData
func (*SegmentPump) WriteDataRecord
func (*SplunkPump) FilterTags
Filters the tags based on the config rule.
Uses: strings.HasPrefix.
func (*SplunkPump) GetEnvPrefix
func (*SplunkPump) GetName
GetName returns the pump name.
func (*SplunkPump) Init
Init performs the initialization of the SplunkClient.
Uses: mapstructure.Decode, retry.NewBackoffRetry.func (*SplunkPump) New
New initializes a new pump.
func (*SplunkPump) WriteData
WriteData prepares an appropriate data structure and sends it to the HTTP Event Collector.
Uses: analytics.AnalyticsRecord, bytes.Buffer, json.Marshal.
func (*StatsdPump) GetEnvPrefix
func (*StatsdPump) GetName
func (*StatsdPump) Init
func (*StatsdPump) New
func (*StatsdPump) WriteData
func (*StdOutPump) GetEnvPrefix
func (*StdOutPump) GetName
func (*StdOutPump) Init
func (*StdOutPump) New
func (*StdOutPump) WriteData
Writes the actual data to stdout.
Uses: analytics.AnalyticsRecord, fmt.Print, logrus.InfoLevel, logrus.JSONFormatter, time.Now.
func (*SyslogPump) GetEnvPrefix
func (*SyslogPump) GetFilters
func (*SyslogPump) GetName
func (*SyslogPump) GetTimeout
func (*SyslogPump) Init
func (*SyslogPump) New
func (*SyslogPump) SetFilters
func (*SyslogPump) SetTimeout
func (*SyslogPump) WriteData
Writes the actual data to syslog.
Uses: analytics.AnalyticsRecord, fmt.Fprintf.
func (*TimestreamPump) BuildTimestreamInputIterator
func (*TimestreamPump) GetAnalyticsRecordDimensions
func (*TimestreamPump) GetAnalyticsRecordMeasures
func (*TimestreamPump) GetEnvPrefix
func (*TimestreamPump) GetName
func (*TimestreamPump) Init
func (*TimestreamPump) MapAnalyticRecord2TimestreamMultimeasureRecord
func (*TimestreamPump) New
func (*TimestreamPump) NewTimestreamWriter
func (*TimestreamPump) WriteData
func (dbObject) GetObjectID
GetObjectID is a dummy function to satisfy the interface
func (dbObject) SetObjectID
SetObjectID is a dummy function to satisfy the interface
func (dbObject) TableName
Private functions
func buildURI
func chunkString
func contains
func createDBObject
func decodeHeaders
func decodeRawData
func fetchIDFromHeader
func fetchTokenPayload
func getDialFn
func getIndexName
func getListOfCommonPrefix
func getMapping
func getMongoDriverType
func init
func mapRawData
func mapToVarChar
func maskData
func maskRawBody
func parseAuthorizationHeader
func parseHeaders
func parsePrivateKey
func printPurgedBulkRecords
printPurgedBulkRecords prints the purged records (equal to the bulk size) when bulk is enabled.
func processPumpEnvVars
func splitIntoBatches
splitIntoBatches splits the records into batches of the specified size.
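A minimal generic sketch of what such a batch splitter does (the signature is an assumption; the package's function operates on its own record type):

```go
package main

import "fmt"

// splitIntoBatches splits items into consecutive batches of at most
// size elements; the final batch holds any remainder.
func splitIntoBatches[T any](items []T, size int) [][]T {
	var batches [][]T
	for size > 0 && len(items) > 0 {
		n := size
		if len(items) < n {
			n = len(items)
		}
		batches = append(batches, items[:n:n])
		items = items[n:]
	}
	return batches
}

func main() {
	nums := []int{1, 2, 3, 4, 5, 6, 7}
	fmt.Println(splitIntoBatches(nums, 3)) // [[1 2 3] [4 5 6] [7]]
}
```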
func toLowerCase
func connect
func connect
func getOperator
func getGraphRecords
func connect
func callRPCFn
func connectAndLogin
connectAndLogin connects to the RPC server and logs in. If retry is true, it will retry with the retryAndLog func.
func connectRPC
func onConnectFunc
func startDispatcher
func connect
func createBucket
func connect
func write
func getSamplingPercentage
func parseConfiguration
func collectionExists
collectionExists checks to see if a collection name exists in the db.
References: context.Background.
func connect
func divideAggregationTime
divideAggregationTime divides the analytics-stored-per-minute setting by two.
func doHash
func ensureIndexes
func getLastDocumentTimestamp
getLastDocumentTimestamp will return the timestamp of the last document in the collection
References: analytics.AgggregateMixedCollectionName, context.Background, errors.New, model.DBM, time.Time.
func printAlert
func accumulate
accumulate processes the given item and updates the accumulator total, result set, and return array. It manages chunking the data into separate sets based on the max batch size limit, and appends the last item when necessary.
References: model.DBObject.
func capCollection
func collectionExists
collectionExists checks to see if a collection name exists in the db.
References: context.Background.
func connect
func ensureIndexes
func getItemSizeBytes
getItemSizeBytes calculates the size of the item in bytes, including an additional 1 KB for metadata.
func handleLargeDocuments
handleLargeDocuments checks if the item size exceeds the max document size limit and modifies the item if necessary.
References: base64.StdEncoding.
func shouldProcessItem
shouldProcessItem checks if the item should be processed based on its ResponseCode and if it's a graph record. It returns the processed item and a boolean indicating if the item should be skipped.
References: analytics.AnalyticsRecord.
func accumulate
accumulate processes the given item and updates the accumulator total, result set, and return array. It manages chunking the data into separate sets based on the max batch size limit, and appends the last item when necessary.
References: model.DBObject.
func collectionExists
collectionExists checks to see if a collection name exists in the db.
References: context.Background.
func connect
func ensureIndexes
func getItemSizeBytes
getItemSizeBytes calculates the size of the analytics item in bytes and checks if it's within the allowed limit.
func processItem
processItem checks if the item should be skipped or processed.
References: analytics.AnalyticsRecord.
func ensureLabels
ensureLabels ensures the validity and consistency of the metric labels.
func obfuscateAPIKey
func initBaseMetrics
func disable
func enable
func initWorker
func writeData
func ensureIndex
ensureIndex creates the new optimized index for tyk_aggregated. It uses CONCURRENTLY to avoid locking the table for a long time (see postgresql.org/docs/current/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY). If background is true, it will run the index creation in a goroutine; if not, it will block until it finishes.
References: fmt.Sprintf.
func ensureTable
ensureTable creates the table if it doesn't exist
References: analytics.SQLAnalyticsRecordAggregate.
func buildIndexName
func createIndex
func ensureIndex
ensureIndex checks that all indexes for the analytics SQL table are in place.
References: errors.New, logrus.Fields, sync.WaitGroup.
func ensureTable
ensureTable creates the table if it doesn't exist
References: analytics.AnalyticsRecord.
func write
func send
func connect
func getMappings
func initConfigs
Sets default values if they are not explicitly given, and performs validation.
func initWriter
func nameMap
func flushRecords
func processData
func flushRecords
func processData
func flushRecords
func processData
func flushRecords
func processData
func getAverageRequestTime
getAverageRequestTime returns the average request time of a histogramCounter by dividing the sum of all the RequestTimes by the hits.
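A sketch of that division, with a zero-hits guard (the types are assumptions, since histogramCounter's fields are unexported):

```go
package main

import "fmt"

// averageRequestTime divides the accumulated request time by the number
// of hits, as described for getAverageRequestTime above.
func averageRequestTime(totalRequestTime, hits float64) float64 {
	if hits == 0 {
		return 0
	}
	return totalRequestTime / hits
}

func main() {
	fmt.Println(averageRequestTime(1250, 50)) // 25
}
```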
Tests
Files: 22. Third party imports: 12. Imports from organisation: 3. Tests: 100. Benchmarks: 0.
Constants
Types
Conn
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| Store | | No comment on field. |
Doc
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| ID | | No comment on field. |
| Foo | | No comment on field. |
MockSQSSendMessageBatchAPI
MockSQSSendMessageBatchAPI is a mock implementation of SQSSendMessageBatchAPI for testing purposes.
| Field name | Field type | Comment |
|---|---|---|
| GetQueueUrlFunc | | No comment on field. |
| SendMessageBatchFunc | | No comment on field. |
dummyObject
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| tableName | | No comment on field. |
splunkStatus
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| Text | | No comment on field. |
| Code | | No comment on field. |
| Len | | No comment on field. |
testHandler
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| test | | No comment on field. |
| batched | | No comment on field. |
| returnErrors | | No comment on field. |
| responses | | No comment on field. |
| reqCount | | No comment on field. |
testListener
This type doesn't have documentation.
| Field name | Field type | Comment |
|---|---|---|
| L | | No comment on field. |