Go API Documentation

github.com/TykTechnologies/tyk-pump/pumps

No package summary is available.

Package

Files: 32. Third party imports: 46. Imports from organisation: 4. Tests: 0. Benchmarks: 0.

Constants

Vars

Types

ApiKeyTransport

This type doesn't have documentation.

Field name Field type Comment
APIKey

string

No comment on field.
APIKeyID

string

No comment on field.

BaseMongoConf

This type doesn't have documentation.

Field name Field type Comment
EnvPrefix

string

Prefix for the environment variables that will be used to override the configuration. Defaults to:

  • TYK_PMP_PUMPS_MONGO_META for the Mongo Pump
  • TYK_PMP_PUMPS_UPTIME_META for the Uptime Pump
  • TYK_PMP_PUMPS_MONGOAGGREGATE_META for the Mongo Aggregate Pump
  • TYK_PMP_PUMPS_MONGOSELECTIVE_META for the Mongo Selective Pump
  • TYK_PMP_PUMPS_MONGOGRAPH_META for the Mongo Graph Pump

MongoURL

string

The full URL to your MongoDB instance. This can be a clustered instance if necessary, and should include the database and username/password data.

MongoUseSSL

bool

Set to true to enable Mongo SSL connection.

MongoSSLInsecureSkipVerify

bool

Allows the use of self-signed certificates when connecting to an encrypted MongoDB database.

MongoSSLAllowInvalidHostnames

bool

Ignore hostname check when it differs from the original (for example with SSH tunneling). The rest of the TLS verification will still be performed.

MongoSSLCAFile

string

Path to the PEM file with trusted root certificates

MongoSSLPEMKeyfile

string

Path to the PEM file which contains both client certificate and private key. This is required for Mutual TLS.

MongoDBType

MongoType

Specifies the MongoDB type. If it's 0, you are using standard MongoDB; if it's 1, AWS DocumentDB; if it's 2, CosmosDB. Defaults to standard MongoDB (0).

OmitIndexCreation

bool

Set to true to disable the default tyk index creation.

MongoSessionConsistency

string

Sets the consistency mode for the session. It defaults to strong. The valid values are: strong, monotonic, eventual.

MongoDriverType

string

MongoDriverType is the type of the driver (library) to use. The valid values are: “mongo-go” and “mgo”. Since v1.9, the default driver is "mongo-go". Check out this guide to learn about MongoDB drivers supported by Tyk Pump.

MongoDirectConnection

bool

MongoDirectConnection informs whether to establish connections only with the specified seed servers, or to obtain information for the whole cluster and establish connections with further servers too. If true, the client will only connect to the host provided in the ConnectionString and won't attempt to discover other hosts in the cluster. Useful when network restrictions prevent discovery, such as with SSH tunneling. Default is false.
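
As a reference, a pump "meta" section exercising these base Mongo options might look like the following. The values are placeholders, and the field names are inferred from the struct above - verify them against the configuration reference for your Tyk Pump version:

"mongo": {
  "type": "mongo",
  "meta": {
    "mongo_url": "mongodb://username:password@mongo-host:27017/tyk_analytics",
    "mongo_use_ssl": true,
    "mongo_ssl_insecure_skip_verify": false,
    "mongo_ssl_ca_file": "/certs/ca.pem",
    "mongo_driver_type": "mongo-go",
    "mongo_direct_connection": false
  }
}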

CSVConf

@PumpConf CSV

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_CSV_META

CSVDir

string

The directory and the filename where the CSV data will be stored.
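
A minimal CSV pump entry, in the same shape as the dogstatsd example further below (the directory value is a placeholder):

"csv": {
  "type": "csv",
  "meta": {
    "csv_dir": "./csv-data"
  }
}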

CSVPump

This type doesn't have documentation.

Field name Field type Comment
csvConf

*CSVConf

No comment on field.
wroteHeaders

bool

No comment on field.

CommonPumpConfig

No comment on field.

CommonPumpConfig

This type doesn't have documentation.

Field name Field type Comment
filters

analytics.AnalyticsFilters

No comment on field.
timeout

int

No comment on field.
maxRecordSize

int

No comment on field.
OmitDetailedRecording

bool

No comment on field.
log

*logrus.Entry

No comment on field.
ignoreFields

[]string

No comment on field.
decodeResponseBase64

bool

No comment on field.
decodeRequestBase64

bool

No comment on field.

CustomMetrics

This type doesn't have documentation.

Field name Field type Comment
type

[]PrometheusMetric

No comment on field.

DogStatsdConf

@PumpConf DogStatsd

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_DOGSTATSD_META

Namespace

string

Prefix for your metrics to datadog.

Address

string

Address of the datadog agent including host & port.

SampleRate

float64

Defaults to 1 which equates to 100% of requests. To sample at 50%, set to 0.5.

AsyncUDS

bool

Enable async UDS over UDP https://github.com/Datadog/datadog-go#unix-domain-sockets-client.

AsyncUDSWriteTimeout

int

Integer write timeout in seconds if async_uds: true.

Buffered

bool

Enable buffering of messages.

BufferedMaxMessages

int

Max messages in single datagram if buffered: true. Default 16.

Tags

[]string

List of tags to be added to the metric. The possible options are listed in the below example.

If no tag is specified the fallback behavior is to use the below tags:

  • path
  • method
  • response_code
  • api_version
  • api_name
  • api_id
  • org_id
  • tracked
  • oauth_id

Note that this configuration can generate significant charges due to the unbound nature of the path tag.

"dogstatsd": {
  "type": "dogstatsd",
  "meta": {
    "address": "localhost:8125",
    "namespace": "pump",
    "async_uds": true,
    "async_uds_write_timeout_seconds": 2,
    "buffered": true,
    "buffered_max_messages": 32,
    "sample_rate": 0.5,
    "tags": [
      "method",
      "response_code",
      "api_version",
      "api_name",
      "api_id",
      "org_id",
      "tracked",
      "path",
      "oauth_id"
    ]
  }
},

On startup, you should see the loaded configs when initializing the dogstatsd pump:

[May 10 15:23:44]  INFO dogstatsd: initializing pump
[May 10 15:23:44]  INFO dogstatsd: namespace: pump.
[May 10 15:23:44]  INFO dogstatsd: sample_rate: 50%
[May 10 15:23:44]  INFO dogstatsd: buffered: true, max_messages: 32
[May 10 15:23:44]  INFO dogstatsd: async_uds: true, write_timeout: 2s

DogStatsdPump

This type doesn't have documentation.

Field name Field type Comment
conf

*DogStatsdConf

No comment on field.
client

*statsd.Client

No comment on field.

CommonPumpConfig

No comment on field.

DummyPump

This type doesn't have documentation.

Field name Field type Comment

CommonPumpConfig

No comment on field.

Elasticsearch3Operator

This type doesn't have documentation.

Field name Field type Comment
esClient

*elasticv3.Client

No comment on field.
bulkProcessor

*elasticv3.BulkProcessor

No comment on field.
log

*logrus.Entry

No comment on field.

Elasticsearch5Operator

This type doesn't have documentation.

Field name Field type Comment
esClient

*elasticv5.Client

No comment on field.
bulkProcessor

*elasticv5.BulkProcessor

No comment on field.
log

*logrus.Entry

No comment on field.

Elasticsearch6Operator

This type doesn't have documentation.

Field name Field type Comment
esClient

*elasticv6.Client

No comment on field.
bulkProcessor

*elasticv6.BulkProcessor

No comment on field.
log

*logrus.Entry

No comment on field.

Elasticsearch7Operator

This type doesn't have documentation.

Field name Field type Comment
esClient

*elasticv7.Client

No comment on field.
bulkProcessor

*elasticv7.BulkProcessor

No comment on field.
log

*logrus.Entry

No comment on field.

ElasticsearchBulkConfig

This type doesn't have documentation.

Field name Field type Comment
Workers

int

Number of workers. Defaults to 1.

FlushInterval

int

Specifies the time in seconds to flush the data and send it to ES. Default disabled.

BulkActions

int

Specifies the number of requests needed to flush the data and send it to ES. Defaults to 1000 requests. If needed, it can be disabled by setting it to -1.

BulkSize

int

Specifies the size (in bytes) needed to flush the data and send it to ES. Defaults to 5MB. If needed, it can be disabled by setting it to -1.

ElasticsearchConf

@PumpConf Elasticsearch

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_ELASTICSEARCH_META

IndexName

string

The name of the index that all the analytics data will be placed in. Defaults to "tyk_analytics".

ElasticsearchURL

string

If sniffing is disabled, the URL that all data will be sent to. Defaults to "http://localhost:9200".

EnableSniffing

bool

If sniffing is enabled, the "elasticsearch_url" will be used to make a request to get a list of all the nodes in the cluster, the returned addresses will then be used. Defaults to false.

DocumentType

string

The type of the document that is created in ES. Defaults to "tyk_analytics".

RollingIndex

bool

Appends the date to the end of the index name, so each day's data is split into a different index name. E.g. tyk_analytics-2016.02.28. Defaults to false.

ExtendedStatistics

bool

If set to true will include the following additional fields: Raw Request, Raw Response and User Agent.

GenerateID

bool

When enabled, generate _id for outgoing records. This prevents duplicate records when retrying ES.

DecodeBase64

bool

Allows the base64 bits to be decoded before being passed to ES.

Version

string

Specifies the ES version. Use "3" for ES 3.X, "5" for ES 5.X, "6" for ES 6.X, "7" for ES 7.X. Defaults to "3".

DisableBulk

bool

Disable batch writing. Defaults to false.

BulkConfig

ElasticsearchBulkConfig

Batch writing trigger configuration. The options are OR'd with each other:

AuthAPIKeyID

string

API Key ID used for APIKey auth in ES. It's sent to ES in the Authorization header as ApiKey base64(auth_api_key_id:auth_api_key).

AuthAPIKey

string

API Key used for APIKey auth in ES. It's sent to ES in the Authorization header as ApiKey base64(auth_api_key_id:auth_api_key).

Username

string

Basic auth username. It's sent to ES in the Authorization header as username:password encoded in base64.

Password

string

Basic auth password. It's sent to ES in the Authorization header as username:password encoded in base64.

UseSSL

bool

Enables SSL connection.

SSLInsecureSkipVerify

bool

Controls whether the pump client verifies the Elasticsearch server's certificate chain and hostname.

SSLCertFile

string

Can be used to set a custom certificate file for authentication with Elasticsearch.

SSLKeyFile

string

Can be used to set a custom key file for authentication with Elasticsearch.
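
A sketch of an Elasticsearch pump entry combining these options. The values are placeholders and the meta key names are inferred from the fields above - check them against your Tyk Pump version:

"elasticsearch": {
  "type": "elasticsearch",
  "meta": {
    "index_name": "tyk_analytics",
    "elasticsearch_url": "http://localhost:9200",
    "enable_sniffing": false,
    "document_type": "tyk_analytics",
    "rolling_index": false,
    "version": "7",
    "disable_bulk": false,
    "bulk_config": {
      "workers": 2,
      "flush_interval": 60
    }
  }
}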

ElasticsearchOperator

This type doesn't have documentation.

Field name Field type Comment
type

any

No comment on field.

ElasticsearchPump

This type doesn't have documentation.

Field name Field type Comment
operator

ElasticsearchOperator

No comment on field.
esConf

*ElasticsearchConf

No comment on field.

CommonPumpConfig

No comment on field.

GraphMongoPump

This type doesn't have documentation.

Field name Field type Comment

CommonPumpConfig

No comment on field.

MongoPump

No comment on field.

GraphSQLAggregatePump

This type doesn't have documentation.

Field name Field type Comment
SQLConf

*SQLAggregatePumpConf

No comment on field.
db

*gorm.DB

No comment on field.

CommonPumpConfig

No comment on field.

GraphSQLConf

This type doesn't have documentation.

Field name Field type Comment
TableName

string

TableName is a configuration field unique to the sql-graph pump. It specifies the name of the SQL table to be created/used by the pump when sharding is disabled; when sharding is enabled, it specifies the table prefix.

SQLConf

No comment on field.

GraphSQLPump

This type doesn't have documentation.

Field name Field type Comment
db

*gorm.DB

No comment on field.
Conf

*GraphSQLConf

No comment on field.
tableName

string

No comment on field.

CommonPumpConfig

No comment on field.

GraylogConf

@PumpConf Graylog

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_GRAYLOG_META

GraylogHost

string

Graylog host.

GraylogPort

int

Graylog port.

Tags

[]string

List of tags to be added to the metric. The possible options are listed in the below example.

If no tag is specified, the fallback behaviour is to send nothing. The possible values are:

  • path
  • method
  • response_code
  • api_version
  • api_name
  • api_id
  • org_id
  • tracked
  • oauth_id
  • raw_request
  • raw_response
  • request_time
  • ip_address
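
A sketch of a Graylog pump entry using these options. The host/port values are placeholders, and the meta key names are assumptions inferred from the fields above:

"graylog": {
  "type": "graylog",
  "meta": {
    "host": "10.0.0.1",
    "port": 12216,
    "tags": ["method", "path", "response_code", "api_id"]
  }
}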

GraylogPump

This type doesn't have documentation.

Field name Field type Comment
client

*gelf.Gelf

No comment on field.
conf

*GraylogConf

No comment on field.

CommonPumpConfig

No comment on field.

HybridPump

HybridPump sends analytics to MDCB over RPC.

Field name Field type Comment

CommonPumpConfig

No comment on field.
clientSingleton

*gorpc.Client

No comment on field.
dispatcher

*gorpc.Dispatcher

No comment on field.
clientIsConnected

atomic.Value

No comment on field.
funcClientSingleton

*gorpc.DispatcherClient

No comment on field.
hybridConfig

*HybridPumpConf

No comment on field.

HybridPumpConf

@PumpConf Hybrid

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_HYBRID_META

ConnectionString

string

MDCB URL connection string

RPCKey

string

Your organization ID to connect to the MDCB installation.

APIKey

string

This is the API key of a user used to authenticate and authorize the Hybrid Pump's access through MDCB. The user should be a standard Dashboard user with minimal privileges, so as to reduce any risk if the user is compromised.

IgnoreTagPrefixList

[]string

Specifies prefixes of tags that should be ignored if aggregated is set to true.

CallTimeout

int

Hybrid pump RPC calls timeout in seconds. Defaults to 10 seconds.

RPCPoolSize

int

Hybrid pump connection pool size. Defaults to 5.

aggregationTime

int

aggregationTime specifies the frequency of the aggregation in minutes if aggregated is set to true.

Aggregated

bool

Send aggregated analytics data to Tyk MDCB

TrackAllPaths

bool

Specifies whether it should store aggregated data for all the endpoints if aggregated is set to true. Defaults to false, which means aggregated data is only stored for tracked endpoints.

StoreAnalyticsPerMinute

bool

Determines if the aggregations should be made per minute (true) or per hour (false) if aggregated is set to true.

UseSSL

bool

Use SSL to connect to Tyk MDCB

SSLInsecureSkipVerify

bool

Skip SSL verification
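
A sketch of a Hybrid pump entry using these options. The values are placeholders and the meta key names are inferred from the fields above - verify them against your Tyk Pump version:

"hybrid": {
  "type": "hybrid",
  "meta": {
    "connection_string": "mdcb.example.com:9091",
    "rpc_key": "<org-id>",
    "api_key": "<api-key>",
    "aggregated": false,
    "track_all_paths": false,
    "use_ssl": true,
    "ssl_insecure_skip_verify": false
  }
}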

Influx2Conf

@PumpConf Influx2

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_INFLUX2_META

BucketName

string

InfluxDB2 pump bucket name.

OrgName

string

InfluxDB2 pump organization name.

Addr

string

InfluxDB2 pump host.

Token

string

InfluxDB2 pump database token.

Fields

[]string

Define which Analytics fields should be sent to InfluxDB2. Check the available fields in the example below. Default value is ["method", "path", "response_code", "api_key", "time_stamp", "api_version", "api_name", "api_id", "org_id", "oauth_id", "raw_request", "request_time", "raw_response", "ip_address"].

Tags

[]string

List of tags to be added to the metric.

Flush

bool

Flush data to InfluxDB2 as soon as the pump receives it

CreateMissingBucket

bool

Create the bucket if it doesn't exist

NewBucketConfig

NewBucket

New bucket configuration
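
A sketch of an InfluxDB2 pump entry using these options. The values are placeholders, and the meta key names are assumptions inferred from the field names above - verify them against your Tyk Pump version:

"influx2": {
  "type": "influx2",
  "meta": {
    "bucket": "tyk-bucket",
    "organization": "tyk-org",
    "address": "http://localhost:8086",
    "token": "<influxdb2-token>",
    "fields": ["method", "path", "response_code", "api_id"],
    "tags": ["path"],
    "create_missing_bucket": true,
    "new_bucket_config": {
      "description": "Tyk analytics",
      "retention_rules": [
        {"every_seconds": 2592000, "type": "expire"}
      ]
    }
  }
}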

Influx2Pump

This type doesn't have documentation.

Field name Field type Comment
dbConf

*Influx2Conf

No comment on field.
client

influxdb2.Client

No comment on field.

CommonPumpConfig

No comment on field.

InfluxConf

@PumpConf Influx

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_INFLUX_META

DatabaseName

string

InfluxDB pump database name.

Addr

string

InfluxDB pump host.

Username

string

InfluxDB pump database username.

Password

string

InfluxDB pump database password.

Fields

[]string

Define which Analytics fields should be sent to InfluxDB. Check the available fields in the example below. Default value is ["method", "path", "response_code", "api_key", "time_stamp", "api_version", "api_name", "api_id", "org_id", "oauth_id", "raw_request", "request_time", "raw_response", "ip_address"].

Tags

[]string

List of tags to be added to the metric.

InfluxPump

This type doesn't have documentation.

Field name Field type Comment
dbConf

*InfluxConf

No comment on field.

CommonPumpConfig

No comment on field.

Json

This type doesn't have documentation.

Field name Field type Comment
type

map[string]any

No comment on field.

KafkaConf

@PumpConf Kafka

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_KAFKA_META

Broker

[]string

The list of brokers used to discover the partitions available on the kafka cluster. E.g. "localhost:9092".

ClientId

string

Unique identifier for client connections established with Kafka.

Topic

string

The topic that the writer will produce messages to.

Timeout

any

Timeout is the maximum amount of seconds to wait for a connect or write to complete.

Compressed

bool

Enable the "github.com/golang/snappy" codec to compress Kafka messages. Defaults to false.

MetaData

map[string]string

Can be used to set custom metadata inside the kafka message.

UseSSL

bool

Enables SSL connection.

SSLInsecureSkipVerify

bool

Controls whether the pump client verifies the kafka server's certificate chain and host name.

SSLCertFile

string

Can be used to set custom certificate file for authentication with kafka.

SSLKeyFile

string

Can be used to set custom key file for authentication with kafka.

SASLMechanism

string

SASL mechanism configuration. Only "plain" and "scram" are supported.

Username

string

SASL username.

Password

string

SASL password.

Algorithm

string

SASL algorithm. It's the algorithm specified for scram mechanism. It could be sha-512 or sha-256. Defaults to "sha-256".
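
A sketch of a Kafka pump entry using these options. The broker/topic values are placeholders, and the meta key names are inferred from the fields above - verify them against your Tyk Pump version:

"kafka": {
  "type": "kafka",
  "meta": {
    "broker": ["localhost:9092"],
    "client_id": "tyk-pump",
    "topic": "tyk-analytics",
    "timeout": 10,
    "compressed": true,
    "use_ssl": false,
    "meta_data": {"env": "production"}
  }
}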

KafkaPump

This type doesn't have documentation.

Field name Field type Comment
kafkaConf

*KafkaConf

No comment on field.
writerConfig

kafka.WriterConfig

No comment on field.
log

*logrus.Entry

No comment on field.

CommonPumpConfig

No comment on field.

KinesisConf

@PumpConf Kinesis

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_KINESIS_META

StreamName

string

A name to identify the stream. The stream name is scoped to the AWS account used by the application that creates the stream. It is also scoped by AWS Region. That is, two streams in two different AWS accounts can have the same name. Two streams in the same AWS account but in two different Regions can also have the same name.

Region

string

AWS Region the Kinesis stream targets

BatchSize

int

Each PutRecords request (the function used in this pump) can support up to 500 records. Each record in the request can be as large as 1 MiB, up to a limit of 5 MiB for the entire request, including partition keys. Each shard can support writes of up to 1,000 records per second, up to a maximum total data write rate of 1 MiB per second.
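
A sketch of a Kinesis pump entry using these options. The values are placeholders, and the meta key names are assumptions inferred from the fields above:

"kinesis": {
  "type": "kinesis",
  "meta": {
    "stream_name": "tyk-analytics-stream",
    "region": "eu-west-1",
    "batch_size": 100
  }
}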

KinesisPump

KinesisPump is a Tyk Pump that sends analytics records to AWS Kinesis.

Field name Field type Comment
client

*kinesis.Client

No comment on field.
kinesisConf

*KinesisConf

No comment on field.
log

*logrus.Entry

No comment on field.

CommonPumpConfig

No comment on field.

LogzioPump

This type doesn't have documentation.

Field name Field type Comment
sender

*lg.LogzioSender

No comment on field.
config

*LogzioPumpConfig

No comment on field.

CommonPumpConfig

No comment on field.

LogzioPumpConfig

@PumpConf Logzio

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_LOGZIO_META

CheckDiskSpace

bool

Set the sender to check if it crosses the maximum allowed disk usage. Default value is true.

DiskThreshold

int

Set disk queue threshold, once the threshold is crossed the sender will not enqueue the received logs. Default value is 98 (percentage of disk).

DrainDuration

string

Set drain duration (flush logs on disk). Default value is 3s.

QueueDir

string

The directory for the queue.

Token

string

Token for sending data to your logzio account.

URL

string

Use this if you do not want to use the default Logz.io URL, i.e. when using a proxy. Default is https://listener.logz.io:8071.
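
A minimal Logzio pump entry (the token value is a placeholder; verify the meta key names against your Tyk Pump version):

"logzio": {
  "type": "logzio",
  "meta": {
    "token": "<logzio-token>"
  }
}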

MoesifConf

@PumpConf Moesif

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_MOESIF_META

ApplicationID

string

Moesif Application Id. You can find your Moesif Application Id from Moesif Dashboard -> Top Right Menu -> API Keys . Moesif recommends creating separate Application Ids for each environment such as Production, Staging, and Development to keep data isolated.

RequestHeaderMasks

[]string

An option to mask a specific request header field.

ResponseHeaderMasks

[]string

An option to mask a specific response header field.

RequestBodyMasks

[]string

An option to mask a specific request body field.

ResponseBodyMasks

[]string

An option to mask a specific response body field.

DisableCaptureRequestBody

bool

An option to disable logging of request body. Default value is false.

DisableCaptureResponseBody

bool

An option to disable logging of response body. Default value is false.

UserIDHeader

string

An optional field name to identify User from a request or response header.

CompanyIDHeader

string

An optional field name to identify Company (Account) from a request or response header.

EnableBulk

bool

Set this to true to enable bulk_config.

BulkConfig

map[string]any

Batch writing trigger configuration.

  • "event_queue_size" - (optional) Specifies the maximum number of events to hold in the queue before sending to Moesif. In case of network issues, when unable to connect/send events to Moesif, it skips adding new events to the queue to prevent memory overflow. Type: int. Default value is 10000.
  • "batch_size" - (optional) Specifies the maximum batch size when sending to Moesif. Type: int. Default value is 200.
  • "timer_wake_up_seconds" - (optional) Specifies how often (every n seconds) the background thread runs to send events to Moesif. Type: int. Default value is 2 seconds.
AuthorizationHeaderName

string

An optional request header field name used to identify the User in Moesif. Default value is authorization.

AuthorizationUserIdField

string

An optional field name used to parse the User from the authorization header in Moesif. Default value is sub.
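
A sketch of a Moesif pump entry using these options. The application ID and header names are placeholders, and the meta key names are inferred from the fields above - verify them against your Tyk Pump version:

"moesif": {
  "type": "moesif",
  "meta": {
    "application_id": "<moesif-application-id>",
    "request_header_masks": ["Authorization"],
    "disable_capture_request_body": false,
    "user_id_header": "X-User-Id",
    "enable_bulk": true,
    "bulk_config": {
      "event_queue_size": 10000,
      "batch_size": 200,
      "timer_wake_up_seconds": 2
    }
  }
}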

MoesifPump

This type doesn't have documentation.

Field name Field type Comment
moesifAPI

moesifapi.API

No comment on field.
moesifConf

*MoesifConf

No comment on field.
filters

analytics.AnalyticsFilters

No comment on field.
timeout

int

No comment on field.
samplingPercentage

int

No comment on field.
eTag

string

No comment on field.
lastUpdatedTime

time.Time

No comment on field.
appConfig

map[string]any

No comment on field.
userSampleRateMap

map[string]any

No comment on field.
companySampleRateMap

map[string]any

No comment on field.

CommonPumpConfig

No comment on field.

MongoAggregateConf

@PumpConf MongoAggregate

Field name Field type Comment

BaseMongoConf

TYKCONFIGEXPAND

UseMixedCollection

bool

If set to true, your pump will store analytics both in your organisation-defined collection z_tyk_analyticz_aggregate_{ORG ID} and in the org-less tyk_analytics_aggregates collection. When set to false, your pump will only store analytics in your org-defined collection.

TrackAllPaths

bool

Specifies whether it should store aggregated data for all the endpoints. Defaults to false, which means aggregated data is only stored for tracked endpoints.

IgnoreTagPrefixList

[]string

Specifies prefixes of tags that should be ignored.

ThresholdLenTagList

int

Determines the threshold for the number of tags in an aggregation. If the number of tags exceeds the threshold, an alert will be printed. Defaults to 1000.

StoreAnalyticsPerMinute

bool

Determines if the aggregations should be made per minute (true) or per hour (false).

AggregationTime

int

Determines the interval (in minutes) at which the aggregations should be made. The maximum value is 60 and the minimum is 1; it defaults to 60. If StoreAnalyticsPerMinute is set to true, this field will be skipped.

EnableAggregateSelfHealing

bool

Determines whether self-healing is activated. Self-healing allows the pump to handle Mongo's document max-size errors by creating a new document when the max size is reached. It also halves the AggregationTime field to avoid the same error in the future.

IgnoreAggregationsList

[]string

This list determines which aggregations are going to be dropped and not stored in the collection. Possible values are: "APIID", "errors", "versions", "apikeys", "oauthids", "geo", "tags", "endpoints", "keyendpoints", "oauthendpoints", and "apiendpoints".
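
A sketch of a Mongo Aggregate pump entry using these options. The values are placeholders, and the meta key names are inferred from the fields above - verify them against your Tyk Pump version:

"mongo-pump-aggregate": {
  "type": "mongo-pump-aggregate",
  "meta": {
    "mongo_url": "mongodb://localhost:27017/tyk_analytics",
    "use_mixed_collection": true,
    "track_all_paths": false,
    "ignore_tag_prefix_list": ["internal-"],
    "store_analytics_per_minute": false,
    "aggregation_time": 60
  }
}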

MongoAggregatePump

This type doesn't have documentation.

Field name Field type Comment
store

persistent.PersistentStorage

No comment on field.
dbConf

*MongoAggregateConf

No comment on field.

CommonPumpConfig

No comment on field.

MongoConf

@PumpConf Mongo

Field name Field type Comment

BaseMongoConf

TYKCONFIGEXPAND

CollectionName

string

Specifies the mongo collection name.

MaxInsertBatchSizeBytes

int

Maximum insert batch size for mongo selective pump. If the batch we are writing surpasses this value, it will be sent in multiple batches. Defaults to 10Mb.

MaxDocumentSizeBytes

int

Maximum document size. If a document exceeds this value, it will be skipped. Defaults to 10Mb.

CollectionCapMaxSizeBytes

int

Amount of bytes of the capped collection in 64bits architectures. Defaults to 5GB.

CollectionCapEnable

bool

Enable collection capping. It's used to set a maximum size of the collection.
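
A sketch of a Mongo pump entry exercising the batching and capping options above. The values are placeholders, and the meta key names are inferred from the fields above - verify them against your Tyk Pump version:

"mongo": {
  "type": "mongo",
  "meta": {
    "collection_name": "tyk_analytics",
    "mongo_url": "mongodb://localhost:27017/tyk_analytics",
    "max_insert_batch_size_bytes": 10485760,
    "max_document_size_bytes": 10485760,
    "collection_cap_enable": true,
    "collection_cap_max_size_bytes": 5368709120
  }
}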

MongoPump

This type doesn't have documentation.

Field name Field type Comment
IsUptime

bool

No comment on field.
store

persistent.PersistentStorage

No comment on field.
dbConf

*MongoConf

No comment on field.

CommonPumpConfig

No comment on field.

MongoSelectiveConf

@PumpConf MongoSelective

Field name Field type Comment

BaseMongoConf

TYKCONFIGEXPAND

MaxInsertBatchSizeBytes

int

Maximum insert batch size for mongo selective pump. If the batch we are writing surpasses this value, it will be sent in multiple batches. Defaults to 10Mb.

MaxDocumentSizeBytes

int

Maximum document size. If a document exceeds this value, it will be skipped. Defaults to 10Mb.

MongoSelectivePump

This type doesn't have documentation.

Field name Field type Comment
store

persistent.PersistentStorage

No comment on field.
dbConf

*MongoSelectiveConf

No comment on field.

CommonPumpConfig

No comment on field.

MongoType

This type doesn't have documentation.

Field name Field type Comment
type

int

No comment on field.

MysqlConfig

This type doesn't have documentation.

Field name Field type Comment
DefaultStringSize

uint

Default size for string fields. Defaults to 256.

DisableDatetimePrecision

bool

Disable datetime precision, which is not supported before MySQL 5.6.

DontSupportRenameIndex

bool

Drop & create the index when renaming it; renaming an index is not supported before MySQL 5.7 and in MariaDB.

DontSupportRenameColumn

bool

Use change when renaming a column; renaming a column is not supported before MySQL 8 and in MariaDB.

SkipInitializeWithVersion

bool

Auto-configure based on the current MySQL version.

NewBucket

Configuration required to create the Bucket if it doesn't already exist. See https://docs.influxdata.com/influxdb/v2.1/api/#operation/PostBuckets.

Field name Field type Comment
Description

string

A description visible on the InfluxDB2 UI

RetentionRules

[]RetentionRule

Rules to expire or retain data. No rules means data never expires.

PostgresConfig

This type doesn't have documentation.

Field name Field type Comment
PreferSimpleProtocol

bool

Disables implicit prepared statement usage.

PrometheusConf

@PumpConf Prometheus

Field name Field type Comment
EnvPrefix

string

Prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_PROMETHEUS_META

Addr

string

The full URL to your Prometheus instance, {HOST}:{PORT}. For example localhost:9090.

Path

string

The path to the Prometheus collection. For example /metrics.

AggregateObservations

bool

This will enable an experimental feature that aggregates the histogram metrics request time values before exposing them to Prometheus. Enabling this will reduce the CPU usage of your Prometheus pump, but you will lose histogram precision. Experimental.

DisabledMetrics

[]string

Metrics to exclude from exposition. Currently, excludes only the base metrics.

TrackAllPaths

bool

Specifies whether it should expose aggregated metrics for all the endpoints. Defaults to false, which means all API endpoints will be counted as 'unknown' unless the API uses the track endpoint plugin.

CustomMetrics

CustomMetrics

Custom Prometheus metrics.
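
A sketch of a Prometheus pump entry with one custom metric. The address and metric name are placeholders, and the meta key names are inferred from the fields above and below - verify them against your Tyk Pump version:

"prometheus": {
  "type": "prometheus",
  "meta": {
    "listen_address": "localhost:9090",
    "path": "/metrics",
    "custom_metrics": [
      {
        "name": "tyk_http_status_per_api_name",
        "description": "HTTP status codes per API",
        "metric_type": "counter",
        "labels": ["response_code", "api_name"]
      }
    ]
  }
}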

PrometheusMetric

This type doesn't have documentation.

Field name Field type Comment
Name

string

The name of the custom metric. For example: tyk_http_status_per_api_name

Help

string

Description text of the custom metric. For example: HTTP status codes per API

MetricType

string

Determines the type of the metric. There are currently 2 available options: counter or histogram. In the case of histogram, you can only modify the labels, since it is always going to use the request_time.

Buckets

[]float64

Defines the buckets into which observations are counted. The type is float64 array and by default, [1, 2, 5, 7, 10, 15, 20, 25, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500, 1000, 2000, 5000, 10000, 30000, 60000]

ObfuscateAPIKeys

bool

Controls whether the pump client should hide the API key. In case you still need a substring of the value, check the next option. Default value is false.

ObfuscateAPIKeysLength

int

Defines the number of characters kept from the end of the API key. obfuscate_api_keys should be set to true. Default value is 4.

Labels

[]string

Defines the partitions in the metrics. For example: ['response_code','api_name']. The available labels are: ["host","method", "path", "response_code", "api_key", "time_stamp", "api_version", "api_name", "api_id", "org_id", "oauth_id","request_time", "ip_address", "alias"].

enabled

bool

No comment on field.
counterVec

*prometheus.CounterVec

No comment on field.
histogramVec

*prometheus.HistogramVec

No comment on field.
counterMap

map[string]counterStruct

No comment on field.
histogramMap

map[string]histogramCounter

No comment on field.
aggregatedObservations

bool

No comment on field.

PrometheusPump

This type doesn't have documentation.

Field name Field type Comment
conf

*PrometheusConf

No comment on field.
TotalStatusMetrics

*prometheus.CounterVec

Per service

PathStatusMetrics

*prometheus.CounterVec

No comment on field.
KeyStatusMetrics

*prometheus.CounterVec

No comment on field.
OauthStatusMetrics

*prometheus.CounterVec

No comment on field.
TotalLatencyMetrics

*prometheus.HistogramVec

No comment on field.
allMetrics

[]*PrometheusMetric

No comment on field.

CommonPumpConfig

No comment on field.

Pump

This type doesn't have documentation.

Field name Field type Comment
type

any

No comment on field.

ResurfacePump

This type doesn't have documentation.

Field name Field type Comment
logger

*logger.HttpLogger

No comment on field.
config

*ResurfacePumpConfig

No comment on field.
data

chan []interface{}

No comment on field.
wg

sync.WaitGroup

No comment on field.
enabled

bool

No comment on field.

CommonPumpConfig

No comment on field.

ResurfacePumpConfig

This type doesn't have documentation.

Field name Field type Comment
EnvPrefix

string

No comment on field.
URL

string

No comment on field.
Rules

string

No comment on field.
Queue

[]string

No comment on field.

RetentionRule

This type doesn't have documentation.

Field name Field type Comment
EverySeconds

int64

Duration in seconds for how long data will be kept in the database. 0 means infinite.

ShardGroupDurationSeconds

int64

Shard duration measured in seconds.

Type

string

Retention rule type, for example "expire".
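A retention rule might be expressed in configuration like this. The key names here are inferred from the field names above and should be verified against your Tyk Pump version.

```json
{
  "every_seconds": 2592000,
  "shard_group_duration_seconds": 86400,
  "type": "expire"
}
```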

SQLAggregatePump

This type doesn't have documentation.

Field name Field type Comment

CommonPumpConfig

No comment on field.
SQLConf

*SQLAggregatePumpConf

No comment on field.
db

*gorm.DB

No comment on field.
dbType

string

No comment on field.
dialect

gorm.Dialector

No comment on field.
backgroundIndexCreated

chan bool

This channel is used to signal that the background index creation has finished; it is used for testing.

SQLAggregatePumpConf

@PumpConf SQLAggregate

Field name Field type Comment

SQLConf

TYKCONFIGEXPAND

EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_SQLAGGREGATE_META

TrackAllPaths

bool

Specifies whether aggregated data should be stored for all endpoints. Defaults to false, which means that aggregated data is only stored for tracked endpoints.

IgnoreTagPrefixList

[]string

Specifies prefixes of tags that should be ignored.

ThresholdLenTagList

int

No comment on field.
StoreAnalyticsPerMinute

bool

Determines if the aggregations should be made per minute instead of per hour.

IgnoreAggregationsList

[]string

No comment on field.
OmitIndexCreation

bool

Set to true to disable the default tyk index creation.
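Putting the fields above together, an SQL aggregate pump configuration might look like the following sketch. The key names follow the Tyk Pump configuration reference; the connection string and tag prefix are purely illustrative.

```json
"sql_aggregate": {
  "name": "sql_aggregate",
  "meta": {
    "type": "postgres",
    "connection_string": "host=localhost port=5432 user=admin dbname=tyk_analytics",
    "track_all_paths": true,
    "ignore_tag_prefix_list": ["internal-"],
    "store_analytics_per_minute": false
  }
}
```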

SQLConf

@PumpConf SQL

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_SQL_META

Type

string

The only supported and tested types are postgres and mysql. From v1.12.0, we no longer support sqlite as a storage type.

ConnectionString

string

Specifies the connection string to the database.

Postgres

PostgresConfig

Postgres configurations.

Mysql

MysqlConfig

Mysql configurations.

TableSharding

bool

Specifies whether all the analytics records are stored in one table or in multiple tables (one per day). Defaults to false. If false, all records are stored in the tyk_aggregated table. If true, each day's records are stored in a tyk_aggregated_YYYYMMDD table, where YYYYMMDD changes depending on the date.

LogLevel

string

Specifies the SQL log verbosity. The possible values are: info, error and warning. By default, the value is silent, which means that no SQL queries are logged.

BatchSize

int

Specifies the number of records written in each batch. By default, it writes at most 1000 records per batch.
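A minimal SQL pump configuration using these fields could look like this sketch. Key names follow the Tyk Pump configuration reference; the connection string is illustrative and should be adapted to your database.

```json
"sql": {
  "name": "sql",
  "meta": {
    "type": "postgres",
    "connection_string": "host=localhost port=5432 user=admin dbname=tyk_analytics",
    "table_sharding": true,
    "log_level": "error",
    "batch_size": 1000
  }
}
```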

SQLPump

This type doesn't have documentation.

Field name Field type Comment

CommonPumpConfig

No comment on field.
IsUptime

bool

No comment on field.
SQLConf

*SQLConf

No comment on field.
db

*gorm.DB

No comment on field.
dbType

string

No comment on field.
dialect

gorm.Dialector

No comment on field.
backgroundIndexCreated

chan bool

This channel is used to signal that the background index creation has finished; it is used for testing.

SQSConf

SQSConf represents the configuration structure for the Tyk Pump SQS (Simple Queue Service) pump.

Field name Field type Comment
EnvPrefix

string

EnvPrefix specifies the prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_SQS_META

QueueName

string

QueueName specifies the name of the AWS Simple Queue Service (SQS) queue for message delivery.

AWSRegion

string

AWSRegion sets the AWS region where the SQS queue is located.

AWSSecret

string

AWSSecret is the AWS secret key used for authentication.

AWSKey

string

AWSKey is the AWS access key ID used for authentication.

AWSToken

string

AWSToken is the AWS session token used for authentication. This is only required when using temporary credentials.

AWSEndpoint

string

AWSEndpoint is the custom endpoint URL for AWS SQS, if applicable.

AWSMessageGroupID

string

AWSMessageGroupID specifies the message group ID for ordered processing within the SQS queue.

AWSMessageIDDeduplicationEnabled

bool

AWSMessageIDDeduplicationEnabled enables/disables message deduplication based on unique IDs.

AWSDelaySeconds

int32

AWSDelaySeconds configures the delay (in seconds) before messages become available for processing.

AWSSQSBatchLimit

int

AWSSQSBatchLimit sets the maximum number of messages in a single batch when sending to the SQS queue.
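An SQS pump configuration built from the fields above might look like the following sketch. The JSON key names are inferred from the field names and comments (e.g. aws_queue_name, aws_region) and should be verified against your Tyk Pump version; the credential values are placeholders.

```json
"sqs": {
  "name": "sqs",
  "meta": {
    "aws_queue_name": "tyk-analytics-queue",
    "aws_region": "us-east-1",
    "aws_key": "<access-key-id>",
    "aws_secret": "<secret-key>",
    "aws_sqs_batch_limit": 10,
    "aws_delay_seconds": 0
  }
}
```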

SQSPump

This type doesn't have documentation.

Field name Field type Comment
SQSClient

SQSSendMessageBatchAPI

No comment on field.
SQSQueueURL

*string

No comment on field.
SQSConf

*SQSConf

No comment on field.
log

*logrus.Entry

No comment on field.

CommonPumpConfig

No comment on field.

SQSSendMessageBatchAPI

This type doesn't have documentation.

Field name Field type Comment
type

any

No comment on field.

SegmentConf

This type doesn't have documentation.

Field name Field type Comment
EnvPrefix

string

No comment on field.
WriteKey

string

No comment on field.

SegmentPump

This type doesn't have documentation.

Field name Field type Comment
segmentClient

*segment.Client

No comment on field.
segmentConf

*SegmentConf

No comment on field.

CommonPumpConfig

No comment on field.

SplunkClient

SplunkClient contains Splunk client methods.

Field name Field type Comment
Token

string

No comment on field.
CollectorURL

string

No comment on field.
TLSSkipVerify

bool

No comment on field.
httpClient

*http.Client

No comment on field.
retry

*retry.BackoffHTTPRetry

No comment on field.

SplunkPump

SplunkPump is a Tyk Pump driver for Splunk.

Field name Field type Comment
client

*SplunkClient

No comment on field.
config

*SplunkPumpConfig

No comment on field.

CommonPumpConfig

No comment on field.

SplunkPumpConfig

SplunkPumpConfig contains the driver configuration parameters. @PumpConf Splunk

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_SPLUNK_META

CollectorToken

string

The token used to authenticate with the Splunk HTTP Event Collector.

CollectorURL

string

Endpoint the Pump will send analytics to. Should look something like: https://splunk:8088/services/collector/event.

SSLInsecureSkipVerify

bool

Controls whether the pump client verifies the Splunk server's certificate chain and host name.

SSLCertFile

string

SSL cert file location.

SSLKeyFile

string

SSL cert key location.

SSLServerName

string

SSL Server name used in the TLS connection.

ObfuscateAPIKeys

bool

Controls whether the pump client should hide the API key. If you still need a substring of the value, see the next option. Default value is false.

ObfuscateAPIKeysLength

int

Defines the number of characters kept from the end of the API key. Requires obfuscate_api_keys to be set to true. Default value is 0.

Fields

[]string

Define which Analytics fields should participate in the Splunk event. Check the available fields in the example below. Default value is ["method", "path", "response_code", "api_key", "time_stamp", "api_version", "api_name", "api_id", "org_id", "oauth_id", "raw_request", "request_time", "raw_response", "ip_address"].

IgnoreTagPrefixList

[]string

Choose which tags to be ignored by the Splunk Pump. Keep in mind that the tag name and value are hyphenated. Default value is [].

EnableBatch

bool

If this is set to true, the pump will send the analytics records to Splunk in batches. Default value is false.

BatchMaxContentLength

int

Max content length in bytes to be sent in batch requests. It should match the max_content_length configured in Splunk. If the purged analytics records don't reach this size, they are sent anyway on each purge_loop. Default value is 838860800 (~800 MB), the same default value as the Splunk configuration.

MaxRetries

uint64

MaxRetries is the maximum number of retries to attempt after failing to send a request to the Splunk HEC. Default value is 0.
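A Splunk pump configuration combining these options might look like this sketch. The key names follow the Tyk Pump configuration reference and the token is a placeholder; verify both against your version.

```json
"splunk": {
  "name": "splunk",
  "meta": {
    "collector_token": "<your-splunk-hec-token>",
    "collector_url": "https://splunk:8088/services/collector/event",
    "ssl_insecure_skip_verify": false,
    "obfuscate_api_keys": true,
    "obfuscate_api_keys_length": 4,
    "fields": ["method", "path", "response_code", "api_key"],
    "enable_batch": true
  }
}
```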

StatsdConf

@PumpConf Statsd

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_STATSD_META

Address

string

Address of statsd including host & port.

Fields

[]string

Defines which analytics fields should have their own metric calculation.

Tags

[]string

List of tags to be added to the metric.

SeparatedMethod

bool

Allows having a separate method field instead of embedding it in the path field.
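A statsd pump configuration built from these fields could look like this sketch; the key names follow the Tyk Pump configuration reference and the field/tag values are illustrative.

```json
"statsd": {
  "name": "statsd",
  "meta": {
    "address": "localhost:8125",
    "fields": ["request_time", "response_code"],
    "tags": ["tyk-pump"],
    "separated_method": true
  }
}
```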

StatsdPump

This type doesn't have documentation.

Field name Field type Comment
dbConf

*StatsdConf

No comment on field.

CommonPumpConfig

No comment on field.

StdOutConf

@PumpConf StdOut

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_STDOUT_META

Format

string

Format of the analytics logs. Defaults to text unless json is explicitly specified. When JSON logging is used, all pump logs to stdout will be JSON.

LogFieldName

string

Root name of the JSON object the analytics record is nested in.
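A minimal stdout pump configuration using these two options might look like this sketch (key names per the Tyk Pump configuration reference; the log field name is illustrative):

```json
"stdout": {
  "name": "stdout",
  "meta": {
    "format": "json",
    "log_field_name": "tyk-analytics-record"
  }
}
```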

StdOutPump

This type doesn't have documentation.

Field name Field type Comment

CommonPumpConfig

No comment on field.
conf

*StdOutConf

No comment on field.

SyslogConf

@PumpConf Syslog

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_SYSLOG_META

Transport

string

Possible values are udp, tcp, tls in string form.

NetworkAddr

string

Host and port of your syslog daemon, e.g. "localhost:5140".

LogLevel

int

The severity level, an integer from 0-7, based on the standard syslog severity levels.

Tag

string

Prefix tag

When working with FluentD, you should provide a FluentD Parser based on the OS you are using so that FluentD can correctly read the logs.

"syslog": {
  "name": "syslog",
  "meta": {
    "transport": "udp",
    "network_addr": "localhost:5140",
    "log_level": 6,
    "tag": "syslog-pump"
  }
}

SyslogPump

This type doesn't have documentation.

Field name Field type Comment
syslogConf

*SyslogConf

No comment on field.
writer

*syslog.Writer

No comment on field.
filters

analytics.AnalyticsFilters

No comment on field.
timeout

int

No comment on field.

CommonPumpConfig

No comment on field.

TimestreamPump

This type doesn't have documentation.

Field name Field type Comment
client

TimestreamWriteRecordsAPI

No comment on field.
config

*TimestreamPumpConf

No comment on field.

CommonPumpConfig

No comment on field.

TimestreamPumpConf

@PumpConf Timestream

Field name Field type Comment
EnvPrefix

string

The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_TIMESTREAM_META

AWSRegion

string

The AWS region that contains the Timestream database.

TableName

string

The table name where the data is going to be written.

DatabaseName

string

The Timestream database name that contains the table being written to.

Dimensions

[]string

A filter of all the dimensions that will be written to the table. The possible options are ["Method","Host","Path","RawPath","APIKey","APIVersion","APIName","APIID","OrgID","OauthID"]

Measures

[]string

A filter of all the measures that will be written to the table. The possible options are ["ContentLength","ResponseCode","RequestTime","NetworkStats.OpenConnections", "NetworkStats.ClosedConnection","NetworkStats.BytesIn","NetworkStats.BytesOut", "Latency.Total","Latency.Upstream","GeoData.City.GeoNameID","IPAddress", "GeoData.Location.Latitude","GeoData.Location.Longitude","UserAgent","RawRequest","RawResponse", "RateLimit.Limit","Ratelimit.Remaining","Ratelimit.Reset", "GeoData.Country.ISOCode","GeoData.City.Names","GeoData.Location.TimeZone"]

WriteRateLimit

bool

Set to true in order to save any of the RateLimit measures. Default value is false.

ReadGeoFromRequest

bool

If set to true, the pump will try to read geo information from the headers if values aren't found on the analytics record. Default value is false.

WriteZeroValues

bool

Set to true in order to save numerical values with a value of zero. Default value is false.

NameMappings

map[string]string

An optional name mapping for both dimension and measure names.
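A Timestream pump configuration built from these fields might look like the following sketch. The JSON key names are inferred from the field names above (e.g. timestream_database_name) and should be verified against your Tyk Pump version; the dimension and measure lists are a subset of the documented options.

```json
"timestream": {
  "name": "timestream",
  "meta": {
    "aws_region": "us-east-1",
    "timestream_database_name": "tyk-analytics",
    "timestream_table_name": "analytics",
    "dimensions": ["Method", "Host", "Path", "APIName"],
    "measures": ["ResponseCode", "RequestTime", "Latency.Total"],
    "write_rate_limit": false
  }
}
```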

TimestreamWriteRecordsAPI

This type doesn't have documentation.

Field name Field type Comment
type

any

No comment on field.

UptimePump

This type doesn't have documentation.

Field name Field type Comment
type

any

No comment on field.

counterStruct

This type doesn't have documentation.

Field name Field type Comment
labelValues

[]string

No comment on field.
count

uint64

No comment on field.

dbObject

This type doesn't have documentation.

Field name Field type Comment
tableName

string

No comment on field.

histogramCounter

histogramCounter is a helper struct to maintain the totalRequestTime and hits in memory

Field name Field type Comment
totalRequestTime

uint64

No comment on field.
hits

uint64

No comment on field.
labelValues

[]string

No comment on field.

rawDecoded

This type doesn't have documentation.

Field name Field type Comment
headers

map[string]any

No comment on field.
body

any

No comment on field.

Functions

func Dialect

Uses: errors.New, mysql.Config, mysql.New, postgres.Config, postgres.New.

func GetPumpByName

Uses: errors.New, strings.ToLower.

func LoadHeadersFromRawRequest

Uses: base64.StdEncoding, bufio.NewReader, bytes.NewReader, http.ReadRequest.

func LoadHeadersFromRawResponse

Uses: base64.StdEncoding, bufio.NewReader, bytes.NewReader, http.ReadResponse.

func Min

func NewLogzioClient

Uses: fmt.Errorf, lg.New, lg.SetCheckDiskSpace, lg.SetDebug, lg.SetDrainDiskThreshold, lg.SetDrainDuration, lg.SetTempDirectory, lg.SetUrl, os.Stderr, time.ParseDuration.

func NewLogzioPumpConfig

Uses: fmt.Sprintf, os.PathSeparator, os.TempDir, time.Now.

func NewSplunkClient

NewSplunkClient initializes a new SplunkClient.

Uses: errors.New, http.DefaultClient, http.ProxyFromEnvironment, http.Transport, tls.Certificate, tls.Config, tls.LoadX509KeyPair, url.Parse.

func (*ApiKeyTransport) RoundTrip

RoundTrip for ApiKeyTransport auth

Uses: base64.StdEncoding, http.DefaultTransport.

func (*BaseMongoConf) GetBlurredURL

Uses: regexp.MustCompile.

func (*CSVPump) GetEnvPrefix

func (*CSVPump) GetName

func (*CSVPump) Init

Uses: mapstructure.Decode, os.MkdirAll.

func (*CSVPump) New

func (*CSVPump) WriteData

Uses: analytics.AnalyticsRecord, csv.NewWriter, fmt.Errorf, fmt.Sprintf, os.Create, os.File, os.IsNotExist, os.O_APPEND, os.O_WRONLY, os.OpenFile, os.Stat, path.Join, time.Now.

func (*CommonPumpConfig) GetDecodedRequest

func (*CommonPumpConfig) GetDecodedResponse

func (*CommonPumpConfig) GetEnvPrefix

func (*CommonPumpConfig) GetFilters

func (*CommonPumpConfig) GetIgnoreFields

func (*CommonPumpConfig) GetMaxRecordSize

func (*CommonPumpConfig) GetOmitDetailedRecording

func (*CommonPumpConfig) GetTimeout

func (*CommonPumpConfig) SetDecodingRequest

func (*CommonPumpConfig) SetDecodingResponse

func (*CommonPumpConfig) SetFilters

func (*CommonPumpConfig) SetIgnoreFields

func (*CommonPumpConfig) SetLogLevel

func (*CommonPumpConfig) SetMaxRecordSize

func (*CommonPumpConfig) SetOmitDetailedRecording

func (*CommonPumpConfig) SetTimeout

func (*CommonPumpConfig) Shutdown

func (*CustomMetrics) Set

Uses: json.Unmarshal.

func (*DogStatsdPump) GetEnvPrefix

func (*DogStatsdPump) GetName

func (*DogStatsdPump) Init

Uses: errors.Wrap, mapstructure.Decode, statsd.Option, statsd.WithMaxMessagesPerPayload, statsd.WithWriteTimeoutUDS, time.Duration, time.Second.

func (*DogStatsdPump) New

func (*DogStatsdPump) Shutdown

func (*DogStatsdPump) WriteData

Uses: analytics.AnalyticsRecord, fmt.Errorf, fmt.Sprintf, strings.TrimRight.

func (*DummyPump) GetName

func (*DummyPump) Init

func (*DummyPump) New

func (*DummyPump) WriteData

func (*ElasticsearchPump) GetEnvPrefix

func (*ElasticsearchPump) GetName

func (*ElasticsearchPump) GetTLSConfig

GetTLSConfig sets the TLS config for the pump

Uses: errors.New, tls.Certificate, tls.Config, tls.LoadX509KeyPair.

func (*ElasticsearchPump) Init

Uses: errors.New, mapstructure.Decode, regexp.MustCompile.

func (*ElasticsearchPump) New

func (*ElasticsearchPump) Shutdown

func (*ElasticsearchPump) WriteData

func (*GraphMongoPump) GetEnvPrefix

func (*GraphMongoPump) GetName

func (*GraphMongoPump) Init

Uses: logrus.Fields, mapstructure.Decode.

func (*GraphMongoPump) New

func (*GraphMongoPump) SetDecodingRequest

func (*GraphMongoPump) SetDecodingResponse

func (*GraphMongoPump) WriteData

Uses: analytics.AnalyticsRecord, analytics.GraphRecord, context.Background, fmt.Errorf, logrus.Fields, model.DBObject, model.NewObjectID, strings.Contains, strings.ToLower.

func (*GraphSQLAggregatePump) DoAggregatedWriting

Uses: analytics.GraphSQLAnalyticsRecordAggregate, analytics.OnConflictAssignments, clause.Assignments, clause.Column, clause.OnConflict, fmt.Sprintf, hex.EncodeToString.

func (*GraphSQLAggregatePump) GetEnvPrefix

func (*GraphSQLAggregatePump) GetName

func (*GraphSQLAggregatePump) Init

Uses: analytics.AggregateGraphSQLTable, analytics.GraphSQLAnalyticsRecordAggregate, gorm.Config, gorm.Open, mapstructure.Decode.

func (*GraphSQLAggregatePump) New

func (*GraphSQLAggregatePump) WriteData

Uses: analytics.AggregateGraphData, analytics.AggregateGraphSQLTable, analytics.AnalyticsRecord, analytics.GraphSQLAnalyticsRecordAggregate.

func (*GraphSQLPump) GetEnvPrefix

func (*GraphSQLPump) GetName

func (*GraphSQLPump) Init

Uses: analytics.GraphRecord, analytics.GraphSQLTableName, fmt.Errorf, gorm.Config, gorm.Open, mapstructure.Decode.

func (*GraphSQLPump) New

func (*GraphSQLPump) SetLogLevel

func (*GraphSQLPump) WriteData

Uses: analytics.GraphRecord.

func (*GraylogPump) GetEnvPrefix

func (*GraylogPump) GetName

func (*GraylogPump) Init

Uses: mapstructure.Decode.

func (*GraylogPump) New

func (*GraylogPump) WriteData

Uses: analytics.AnalyticsRecord, base64.StdEncoding, json.Marshal.

func (*HybridPump) GetName

func (*HybridPump) Init

Uses: errors.New, mapstructure.Decode.

func (*HybridPump) New

func (*HybridPump) RPCLogin

Uses: errors.New.

func (*HybridPump) Shutdown

func (*HybridPump) WriteData

Uses: analytics.AggregateData, errors.Is, json.Marshal.

func (*HybridPumpConf) CheckDefaults

func (*Influx2Pump) GetEnvPrefix

func (*Influx2Pump) GetName

func (*Influx2Pump) Init

Uses: context.Background, context.WithCancel, domain.Bucket, domain.ReadyStatusReady, fmt.Errorf, mapstructure.Decode.

func (*Influx2Pump) New

func (*Influx2Pump) Shutdown

func (*Influx2Pump) WriteData

Uses: analytics.AnalyticsRecord, influxdb2.NewPoint, json.Marshal, strings.Trim, time.Now.

func (*InfluxPump) GetEnvPrefix

func (*InfluxPump) GetName

func (*InfluxPump) Init

Uses: mapstructure.Decode.

func (*InfluxPump) New

func (*InfluxPump) WriteData

Uses: analytics.AnalyticsRecord, json.Marshal, strings.Trim, time.Now.

func (*KafkaPump) GetEnvPrefix

func (*KafkaPump) GetName

func (*KafkaPump) Init

Uses: mapstructure.Decode, os.Getenv, plain.Mechanism, sasl.Mechanism, scram.Mechanism, scram.SHA256, scram.SHA512, snappy.NewCompressionCodec, strconv.ParseFloat, time.Duration, time.ParseDuration, time.Second, tls.Certificate, tls.Config, tls.LoadX509KeyPair.

func (*KafkaPump) New

func (*KafkaPump) WriteData

Uses: analytics.AnalyticsRecord, json.Marshal, time.Now.

func (*KinesisPump) GetEnvPrefix

func (*KinesisPump) GetName

GetName returns the name of the pump.

func (*KinesisPump) Init

Init initializes the pump with configuration settings.

Uses: awsconfig.LoadDefaultConfig, awsconfig.WithRegion, context.TODO, kinesis.NewFromConfig, mapstructure.Decode.

func (*KinesisPump) New

func (*KinesisPump) WriteData

WriteData writes the analytics records to AWS Kinesis in batches.

Uses: analytics.AnalyticsRecord, aws.String, aws.ToString, big.NewInt, fmt.Sprint, json.Marshal, kinesis.PutRecordsInput, rand.Int, rand.Reader, types.PutRecordsRequestEntry.

func (*LogzioPump) GetEnvPrefix

func (*LogzioPump) GetName

func (*LogzioPump) Init

Uses: mapstructure.Decode.

func (*LogzioPump) New

func (*LogzioPump) WriteData

Uses: analytics.AnalyticsRecord, fmt.Errorf, json.Marshal.

func (*MoesifPump) GetEnvPrefix

func (*MoesifPump) GetName

func (*MoesifPump) GetTimeout

func (*MoesifPump) Init

Uses: mapstructure.Decode, time.Now.

func (*MoesifPump) New

func (*MoesifPump) SetTimeout

func (*MoesifPump) Shutdown

func (*MoesifPump) WriteData

Uses: analytics.AnalyticsRecord, base64.StdEncoding, logrus.Fields, math.Floor, models.EventModel, models.EventRequestModel, models.EventResponseModel, rand.Intn, rand.Seed, strconv.Itoa, strings.Contains, strings.Split, strings.ToLower, time.Duration, time.Millisecond, time.Minute, time.Now.

func (*MongoAggregatePump) DoAggregatedWriting

Uses: analytics.AnalyticsRecordAggregate, logrus.Fields, model.DBM.

func (*MongoAggregatePump) GetCollectionName

Uses: errors.New.

func (*MongoAggregatePump) GetEnvPrefix

func (*MongoAggregatePump) GetName

func (*MongoAggregatePump) Init

Uses: analytics.MongoAggregatePrefix, analytics.SetlastTimestampAgggregateRecord, envconfig.Process, mapstructure.Decode.

func (*MongoAggregatePump) New

func (*MongoAggregatePump) SetAggregationTime

SetAggregationTime sets the aggregation time for the pump

func (*MongoAggregatePump) SetDecodingRequest

func (*MongoAggregatePump) SetDecodingResponse

func (*MongoAggregatePump) ShouldSelfHeal

ShouldSelfHeal returns true if the pump should self heal

Uses: analytics.SetlastTimestampAgggregateRecord, strings.Contains, time.Time.

func (*MongoAggregatePump) WriteData

Uses: analytics.AggregateData.

func (*MongoAggregatePump) WriteUptimeData

WriteUptimeData will pull the data from the in-memory store and drop it into the specified MongoDB collection

func (*MongoPump) AccumulateSet

AccumulateSet groups data items into chunks based on the max batch size limit while handling graph analytics records separately. It returns a 2D array of DBObjects.

Uses: model.DBObject.

func (*MongoPump) GetEnvPrefix

func (*MongoPump) GetName

func (*MongoPump) Init

Uses: envconfig.Process, logrus.Fields, mapstructure.Decode.

func (*MongoPump) New

func (*MongoPump) SetDecodingRequest

func (*MongoPump) SetDecodingResponse

func (*MongoPump) WriteData

Uses: context.Background, logrus.Fields, model.DBObject.

func (*MongoPump) WriteUptimeData

WriteUptimeData will pull the data from the in-memory store and drop it into the specified MongoDB collection

Uses: analytics.UptimeReportData, context.Background, model.DBObject.

func (*MongoSelectivePump) AccumulateSet

AccumulateSet organizes analytics data into a set of chunks based on their size.

Uses: model.DBObject.

func (*MongoSelectivePump) GetCollectionName

Uses: errors.New.

func (*MongoSelectivePump) GetEnvPrefix

func (*MongoSelectivePump) GetName

func (*MongoSelectivePump) Init

Uses: envconfig.Process, mapstructure.Decode.

func (*MongoSelectivePump) New

func (*MongoSelectivePump) SetDecodingRequest

func (*MongoSelectivePump) SetDecodingResponse

func (*MongoSelectivePump) WriteData

Uses: analytics.AnalyticsRecord, context.Background.

func (*MongoSelectivePump) WriteUptimeData

WriteUptimeData will pull the data from the in-memory store and drop it into the specified MongoDB collection

Uses: analytics.UptimeReportData, analytics.UptimeSQLTable, context.Background, model.DBObject.

func (*PrometheusMetric) Expose

Expose executes Prometheus library functions using the counter/histogram vector from the PrometheusMetric struct. If the PrometheusMetric is counterType, it executes the Prometheus client Add function to add the counters from counterMap to the labelled metric. If the PrometheusMetric is histogramType and the aggregate_observations config is true, it calculates the average value of the metrics in the histogramMap and executes Prometheus Observe. If aggregate_observations is false, it does nothing, since the metric has already been exposed.

Uses: errors.New.

func (*PrometheusMetric) GetLabelsValues

GetLabelsValues returns a list of string values based on the custom metric labels.

Uses: fmt.Sprint.

func (*PrometheusMetric) Inc

Inc is going to fill counterMap and histogramMap with the data from record.

Uses: errors.New, strings.Join.

func (*PrometheusMetric) InitVec

InitVec initializes the Prometheus metric based on the metric_type. It can only create counter and histogram metrics; if the metric_type is anything else, it returns an error.

Uses: errors.New, prometheus.CounterOpts, prometheus.HistogramOpts, prometheus.MustRegister, prometheus.NewCounterVec, prometheus.NewHistogramVec.

func (*PrometheusMetric) Observe

Observe will fill histogramMap with the sum of totalRequestTime and hits per label value if aggregate_observations is true. If aggregate_observations is set to false (default) it will execute prometheus Observe directly.

Uses: errors.New, strings.Join.

func (*PrometheusPump) CreateBasicMetrics

CreateBasicMetrics stores all the predefined pump metrics in allMetrics slice

func (*PrometheusPump) GetEnvPrefix

func (*PrometheusPump) GetName

func (*PrometheusPump) Init

Uses: errors.New, http.Handle, http.ListenAndServe, mapstructure.Decode, promhttp.Handler.

func (*PrometheusPump) InitCustomMetrics

InitCustomMetrics initialises custom Prometheus metrics based on p.conf.CustomMetrics and adds them to p.allMetrics

func (*PrometheusPump) New

func (*PrometheusPump) WriteData

Uses: analytics.AnalyticsRecord, errors.New, logrus.Fields.

func (*ResurfacePump) Flush

Uses: context.Background.

func (*ResurfacePump) GetEnvPrefix

func (*ResurfacePump) GetName

func (*ResurfacePump) Init

Uses: errors.New, logger.NewHttpLogger, logger.Options, mapstructure.Decode, sync.WaitGroup.

func (*ResurfacePump) New

func (*ResurfacePump) Shutdown

func (*ResurfacePump) WriteData

func (*SQLAggregatePump) DoAggregatedWriting

Uses: analytics.OnConflictAssignments, analytics.SQLAnalyticsRecordAggregate, clause.Assignments, clause.Column, clause.OnConflict, fmt.Sprintf, hex.EncodeToString.

func (*SQLAggregatePump) GetEnvPrefix

func (*SQLAggregatePump) GetName

func (*SQLAggregatePump) Init

Uses: analytics.AggregateSQLTable, gorm.Config, gorm.Open, mapstructure.Decode.

func (*SQLAggregatePump) New

func (*SQLAggregatePump) SetDecodingRequest

func (*SQLAggregatePump) SetDecodingResponse

func (*SQLAggregatePump) WriteData

WriteData aggregates and writes the passed data to SQL database. When table sharding is enabled, startIndex and endIndex are found by checking timestamp of the records. The main for loop iterates and finds the index where a new day starts. Then, the data is passed to AggregateData function and written to database day by day on different tables. However, if table sharding is not enabled, the for loop iterates one time and all data is passed at once to the AggregateData function and written to database on single table.

Uses: analytics.AggregateData, analytics.AggregateSQLTable, analytics.AnalyticsRecord.

func (*SQLPump) GetEnvPrefix

func (*SQLPump) GetName

func (*SQLPump) Init

Uses: analytics.AnalyticsRecord, analytics.SQLTable, analytics.UptimeReportAggregateSQL, analytics.UptimeSQLTable, gorm.Config, gorm.Open, mapstructure.Decode.

func (*SQLPump) New

func (*SQLPump) SetDecodingRequest

func (*SQLPump) SetDecodingResponse

func (*SQLPump) WriteData

Uses: analytics.AnalyticsRecord, analytics.SQLTable.

func (*SQLPump) WriteUptimeData

Uses: analytics.AggregateUptimeData, analytics.OnConflictUptimeAssignments, analytics.UptimeReportAggregateSQL, analytics.UptimeReportData, analytics.UptimeSQLTable, clause.Assignments, clause.Column, clause.OnConflict, fmt.Sprintf, hex.EncodeToString.

func (*SQSPump) GetEnvPrefix

func (*SQSPump) GetName

func (*SQSPump) Init

Uses: aws.String, context.TODO, mapstructure.Decode, sqs.GetQueueUrlInput.

func (*SQSPump) New

func (*SQSPump) NewSQSPublisher

Uses: aws.String, config.LoadDefaultConfig, config.WithRegion, context.TODO, credentials.NewStaticCredentialsProvider, sqs.NewFromConfig, sqs.Options.

func (*SQSPump) WriteData

Uses: analytics.AnalyticsRecord, aws.String, json.Marshal, time.Now, time.Since, types.SendMessageBatchRequestEntry.

func (*SegmentPump) GetEnvPrefix

func (*SegmentPump) GetName

func (*SegmentPump) Init

Uses: mapstructure.Decode, segment.New.

func (*SegmentPump) New

func (*SegmentPump) ToJSONMap

Uses: json.Marshal, json.Unmarshal.

func (*SegmentPump) WriteData

Uses: analytics.AnalyticsRecord.

func (*SegmentPump) WriteDataRecord

Uses: segment.Track.

func (*SplunkPump) FilterTags

FilterTags filters the tags based on the configured rules.

Uses: strings.HasPrefix.

func (*SplunkPump) GetEnvPrefix

func (*SplunkPump) GetName

GetName returns the pump name.

func (*SplunkPump) Init

Init performs the initialization of the SplunkClient.

Uses: mapstructure.Decode, retry.NewBackoffRetry.

func (*SplunkPump) New

New initializes a new pump.

func (*SplunkPump) WriteData

WriteData prepares an appropriate data structure and sends it to the HTTP Event Collector.

Uses: analytics.AnalyticsRecord, bytes.Buffer, json.Marshal.

func (*StatsdPump) GetEnvPrefix

func (*StatsdPump) GetName

func (*StatsdPump) Init

Uses: mapstructure.Decode.

func (*StatsdPump) New

func (*StatsdPump) WriteData

Uses: analytics.AnalyticsRecord, json.Marshal, strings.Replace, strings.ToLower.

func (*StdOutPump) GetEnvPrefix

func (*StdOutPump) GetName

func (*StdOutPump) Init

Uses: mapstructure.Decode.

func (*StdOutPump) New

func (*StdOutPump) WriteData

Writes the actual data to stdout.

Uses: analytics.AnalyticsRecord, fmt.Print, logrus.InfoLevel, logrus.JSONFormatter, time.Now.

func (*SyslogPump) GetEnvPrefix

func (*SyslogPump) GetFilters

func (*SyslogPump) GetName

func (*SyslogPump) GetTimeout

func (*SyslogPump) Init

Uses: mapstructure.Decode.

func (*SyslogPump) New

func (*SyslogPump) SetFilters

func (*SyslogPump) SetTimeout

func (*SyslogPump) WriteData

Writes the actual data to syslog.

Uses: analytics.AnalyticsRecord, fmt.Fprintf.

func (*TimestreamPump) BuildTimestreamInputIterator

Uses: analytics.AnalyticsRecord, math.Ceil, types.Record.

func (*TimestreamPump) GetAnalyticsRecordDimensions

Uses: aws.String, types.Dimension, types.DimensionValueTypeVarchar.

func (*TimestreamPump) GetAnalyticsRecordMeasures

Uses: aws.String, fmt.Sprintf, strconv.FormatFloat, strconv.FormatInt, strconv.FormatUint, strconv.ParseInt, types.MeasureValue, types.MeasureValueTypeBigint, types.MeasureValueTypeDouble, types.MeasureValueTypeVarchar.

func (*TimestreamPump) GetEnvPrefix

func (*TimestreamPump) GetName

func (*TimestreamPump) Init

Uses: errors.New, mapstructure.Decode.

func (*TimestreamPump) MapAnalyticRecord2TimestreamMultimeasureRecord

Uses: aws.String, strconv.FormatInt, types.MeasureValueTypeMulti, types.Record, types.TimeUnitNanoseconds.

func (*TimestreamPump) New

func (*TimestreamPump) NewTimestreamWriter

Uses: aws.Retryer, config.LoadDefaultConfig, config.WithHTTPClient, config.WithRegion, config.WithRetryer, context.TODO, http.Client, http.ProxyFromEnvironment, http.Transport, http2.ConfigureTransport, net.Dialer, retry.AddWithMaxAttempts, retry.NewStandard, time.Duration, time.Second, timestreamwrite.NewFromConfig.

func (*TimestreamPump) WriteData

Uses: aws.String, timestreamwrite.WriteRecordsInput, types.Record, types.RejectedRecordsException.

func (dbObject) GetObjectID

GetObjectID is a dummy function to satisfy the interface

func (dbObject) SetObjectID

SetObjectID is a dummy function to satisfy the interface

func (dbObject) TableName

Private functions

func buildURI

References: strings.Fields, strings.SplitN.

func chunkString

References: math.Ceil.
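A minimal sketch of the kind of fixed-size string chunking a helper like chunkString performs; the real function's signature is not shown in this index, so the name and parameters below are illustrative only.

```go
package main

import (
	"fmt"
	"math"
)

// chunkStringExample splits s into pieces of at most chunkSize bytes.
// math.Ceil pre-computes the number of chunks for the slice allocation.
func chunkStringExample(s string, chunkSize int) []string {
	if chunkSize <= 0 || len(s) <= chunkSize {
		return []string{s}
	}
	numChunks := int(math.Ceil(float64(len(s)) / float64(chunkSize)))
	chunks := make([]string, 0, numChunks)
	for i := 0; i < len(s); i += chunkSize {
		end := i + chunkSize
		if end > len(s) {
			end = len(s)
		}
		chunks = append(chunks, s[i:end])
	}
	return chunks
}

func main() {
	fmt.Println(chunkStringExample("abcdefgh", 3)) // [abc def gh]
}
```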

func contains

func createDBObject

func decodeHeaders

References: bufio.NewScanner, strings.Count, strings.NewReader, strings.SplitN, strings.TrimSpace.

func decodeRawData

References: fmt.Errorf, strings.SplitN.

func fetchIDFromHeader

References: strings.ToLower.

func fetchTokenPayload

References: strings.SplitAfter, strings.TrimSpace.

func getDialFn

References: net.Conn, net.Dialer, time.Duration, time.Second, tls.Config, tls.DialWithDialer.

func getIndexName

References: time.Now.

func getListOfCommonPrefix

References: sort.Slice.

func getMapping

References: base64.StdEncoding, fmt.Sprintf, murmur3.New64.

func getMongoDriverType

References: persistent.OfficialMongo.

func init

func mapRawData

References: base64.StdEncoding, http.Request, http.Response, ioutil.NopCloser, reflect.ValueOf, strconv.Atoi, strings.Contains, strings.Fields, strings.HasPrefix, strings.Index, strings.LastIndex, strings.NewReader, strings.ReplaceAll, strings.SplitN, url.Parse.

func mapToVarChar

References: fmt.Sprintf, sort.Strings.

func maskData

func maskRawBody

References: base64.StdEncoding, json.Marshal, json.Unmarshal.

func parseAuthorizationHeader

References: base64.RawURLEncoding, json.Unmarshal.

func parseHeaders

References: http.Header, strings.Split.

func parsePrivateKey

References: ecdsa.PrivateKey, fmt.Errorf, rsa.PrivateKey, x509.ParseECPrivateKey, x509.ParsePKCS1PrivateKey, x509.ParsePKCS8PrivateKey.

func printPurgedBulkRecords

printPurgedBulkRecords prints the number of purged records (equal to the bulk size) when bulk writing is enabled.

func processPumpEnvVars

References: envconfig.Process, fmt.Sprintf.

func splitIntoBatches

splitIntoBatches splits the records into batches of the specified size.
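The batching described above can be sketched as follows; the real function operates on analytics records, so []string is used here purely for illustration.

```go
package main

import "fmt"

// splitIntoBatchesExample splits records into slices of at most batchSize
// elements, preserving order.
func splitIntoBatchesExample(records []string, batchSize int) [][]string {
	var batches [][]string
	for batchSize > 0 && len(records) > 0 {
		end := batchSize
		if end > len(records) {
			end = len(records)
		}
		batches = append(batches, records[:end])
		records = records[end:]
	}
	return batches
}

func main() {
	fmt.Println(splitIntoBatchesExample([]string{"a", "b", "c", "d", "e"}, 2)) // [[a b] [c d] [e]]
}
```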

func toLowerCase

References: strings.ToLower.

func connect

References: errors.Wrap, statsd.New.

func connect

References: time.Second, time.Sleep.

func getOperator

References: context.Background, elasticv3.BulkResponse, elasticv3.BulkableRequest, elasticv3.NewClient, elasticv3.SetBasicAuth, elasticv3.SetHttpClient, elasticv3.SetSniff, elasticv3.SetURL, elasticv5.BulkResponse, elasticv5.BulkableRequest, elasticv5.NewClient, elasticv5.SetBasicAuth, elasticv5.SetHttpClient, elasticv5.SetSniff, elasticv5.SetURL, elasticv6.BulkResponse, elasticv6.BulkableRequest, elasticv6.NewClient, elasticv6.SetBasicAuth, elasticv6.SetHttpClient, elasticv6.SetSniff, elasticv6.SetURL, elasticv7.BulkResponse, elasticv7.BulkableRequest, elasticv7.NewClient, elasticv7.SetBasicAuth, elasticv7.SetHttpClient, elasticv7.SetSniff, elasticv7.SetURL, http.Client, http.DefaultClient, http.Transport, strings.Split, time.Duration, time.Second.

func getGraphRecords

References: analytics.AnalyticsRecord, analytics.GraphRecord.

func connect

References: gelf.Config, gelf.New.

func callRPCFn

References: time.Duration, time.Second.

func connectAndLogin

connectAndLogin connects to the RPC server and logs in. If retry is true, it retries using the retryAndLog func.

func connectRPC

References: errors.New, gorpc.NewTCPClient, gorpc.NewTLSClient, gorpc.NilErrorLogger, logrus.DebugLevel, tls.Config, uuid.NewV4.

func onConnectFunc

func startDispatcher

References: gorpc.NewDispatcher.

func connect

References: influxdb2.DefaultOptions, influxdb2.NewClientWithOptions, time.Microsecond.

func createBucket

References: domain.Bucket, domain.RetentionRule, domain.RetentionRuleType, domain.RetentionRules, domain.SchemaTypeImplicit.

func connect

References: time.Second, time.Sleep.

func write

func getSamplingPercentage

func parseConfiguration

References: ioutil.ReadAll, json.Unmarshal, logrus.Fields, time.Now.

func collectionExists

collectionExists checks to see if a collection name exists in the db.

References: context.Background.

func connect

References: persistent.ClientOpts, persistent.NewPersistentStorage.

func divideAggregationTime

divideAggregationTime halves the analytics-stored-per-minute setting.

func doHash

References: b64.StdEncoding, strings.TrimRight.

func ensureIndexes

References: context.Background, model.DBM, model.Index.

func getLastDocumentTimestamp

getLastDocumentTimestamp returns the timestamp of the last document in the collection.

References: analytics.AgggregateMixedCollectionName, context.Background, errors.New, model.DBM, time.Time.

func printAlert

References: strings.Join.

func accumulate

accumulate processes the given item and updates the accumulator total, result set, and return array. It manages chunking the data into separate sets based on the max batch size limit, and appends the last item when necessary.

References: model.DBObject.
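The chunking logic described above can be sketched like this; item sizes are supplied directly here for illustration, whereas the real accumulate derives them from the documents themselves.

```go
package main

import "fmt"

// accumulateExample appends items to the current set until adding another
// would exceed maxBatchBytes, then flushes the set into the return array.
// The last partial set is appended at the end.
func accumulateExample(sizes []int, maxBatchBytes int) [][]int {
	var sets [][]int
	var current []int
	total := 0
	for _, size := range sizes {
		if total+size > maxBatchBytes && len(current) > 0 {
			sets = append(sets, current)
			current = nil
			total = 0
		}
		current = append(current, size)
		total += size
	}
	if len(current) > 0 {
		sets = append(sets, current) // append the last set when necessary
	}
	return sets
}

func main() {
	fmt.Println(accumulateExample([]int{4, 4, 4, 4, 4}, 10)) // [[4 4] [4 4] [4]]
}
```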

func capCollection

References: context.Background, model.DBM, model.DBObject, strconv.IntSize.

func collectionExists

collectionExists checks to see if a collection name exists in the db.

References: context.Background.

func connect

References: persistent.ClientOpts, persistent.NewPersistentStorage.

func ensureIndexes

References: context.Background, model.DBM, model.Index.

func getItemSizeBytes

getItemSizeBytes calculates the size of the item in bytes, including an additional 1 KB for metadata.
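A minimal sketch of the size estimate described above: the marshalled length of the item plus a fixed 1 KB allowance for metadata. The marshalling format is an assumption; the real function may measure BSON rather than JSON.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// getItemSizeBytesExample estimates an item's stored size as its encoded
// length plus 1 KB of metadata overhead.
func getItemSizeBytesExample(item interface{}) (int, error) {
	b, err := json.Marshal(item)
	if err != nil {
		return 0, err
	}
	return len(b) + 1024, nil // add 1 KB for metadata
}

func main() {
	size, _ := getItemSizeBytesExample(map[string]string{"path": "/api"})
	fmt.Println(size > 1024) // true
}
```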

func handleLargeDocuments

handleLargeDocuments checks if the item size exceeds the max document size limit and modifies the item if necessary.

References: base64.StdEncoding.
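A sketch of the behaviour described above: when a record's payload would exceed the maximum document size, the oversized raw fields are replaced with a short base64-encoded placeholder. The field names and placeholder text below are illustrative assumptions, not the pump's actual values.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// record holds base64-encoded raw request/response payloads, mirroring the
// shape of an analytics record for illustration only.
type record struct {
	RawRequest  string
	RawResponse string
}

// handleLargeDocumentExample replaces oversized payloads with a placeholder
// so the document fits within maxBytes.
func handleLargeDocumentExample(r *record, maxBytes int) {
	if len(r.RawRequest)+len(r.RawResponse) <= maxBytes {
		return
	}
	placeholder := base64.StdEncoding.EncodeToString([]byte("--- OMITTED: DOCUMENT TOO LARGE ---"))
	r.RawRequest = placeholder
	r.RawResponse = placeholder
}

func main() {
	r := &record{RawRequest: string(make([]byte, 100)), RawResponse: "x"}
	handleLargeDocumentExample(r, 50)
	fmt.Println(len(r.RawRequest) < 100) // true
}
```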

func shouldProcessItem

shouldProcessItem checks if the item should be processed based on its ResponseCode and if it's a graph record. It returns the processed item and a boolean indicating if the item should be skipped.

References: analytics.AnalyticsRecord.
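A sketch of the filter described above: an item with an unset response code is skipped, and whether it is a graph record must match what the pump expects. The fields and exact rules below are assumptions based only on the summary above.

```go
package main

import "fmt"

// item mirrors the two fields the filter is described as inspecting.
type item struct {
	ResponseCode  int
	IsGraphRecord bool
}

// shouldProcessItemExample reports whether the item should be processed
// rather than skipped.
func shouldProcessItemExample(it item, wantGraph bool) bool {
	if it.ResponseCode < 1 {
		return false // incomplete record, skip
	}
	return it.IsGraphRecord == wantGraph
}

func main() {
	fmt.Println(shouldProcessItemExample(item{ResponseCode: 200}, false)) // true
	fmt.Println(shouldProcessItemExample(item{ResponseCode: 0}, false))  // false
}
```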

func accumulate

accumulate processes the given item and updates the accumulator total, result set, and return array. It manages chunking the data into separate sets based on the max batch size limit, and appends the last item when necessary.

References: model.DBObject.

func collectionExists

collectionExists checks to see if a collection name exists in the db.

References: context.Background.

func connect

References: persistent.ClientOpts, persistent.NewPersistentStorage.

func ensureIndexes

References: context.Background, model.DBM, model.Index, strings.Contains.

func getItemSizeBytes

getItemSizeBytes calculates the size of the analytics item in bytes and checks if it's within the allowed limit.

func processItem

processItem checks if the item should be skipped or processed.

References: analytics.AnalyticsRecord.

func ensureLabels

ensureLabels ensures the validity and consistency of the metric labels.

func obfuscateAPIKey

func initBaseMetrics

func disable

func enable

func initWorker

func writeData

References: analytics.AnalyticsRecord, logger.SendHttpMessage.

func ensureIndex

ensureIndex creates the new optimized index for tyk_aggregated. It uses CONCURRENTLY to avoid locking the table for a long time (see postgresql.org/docs/current/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY). If background is true, it runs the index creation in a goroutine; otherwise it blocks until the index is created.

References: fmt.Sprintf.
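The statement this helper is described as issuing can be sketched as below; the index and column names are illustrative, as the real pump derives them from the aggregated analytics table.

```go
package main

import "fmt"

// buildConcurrentIndexSQL assembles a non-blocking index creation statement.
func buildConcurrentIndexSQL(indexName, tableName, column string) string {
	return fmt.Sprintf("CREATE INDEX CONCURRENTLY IF NOT EXISTS %s ON %s (%s)",
		indexName, tableName, column)
}

func main() {
	fmt.Println(buildConcurrentIndexSQL("idx_dimension", "tyk_aggregated", "dimension"))
}
```

Note that PostgreSQL forbids CREATE INDEX CONCURRENTLY inside a transaction block, which is one reason running it in a background goroutine is attractive.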

func ensureTable

ensureTable creates the table if it doesn't exist

References: analytics.SQLAnalyticsRecordAggregate.

func buildIndexName

References: fmt.Sprintf.

func createIndex

References: analytics.AnalyticsRecord, errors.New, fmt.Sprintf, logrus.Fields.

func ensureIndex

ensureIndex checks that all indexes for the analytics SQL table are in place.

References: errors.New, logrus.Fields, sync.WaitGroup.

func ensureTable

ensureTable creates the table if it doesn't exist

References: analytics.AnalyticsRecord.

func write

References: sqs.SendMessageBatchInput.

func send

References: bytes.NewReader, http.MethodPost, http.NewRequest.

func connect

References: statsd.NewStatsdClient, time.Second, time.Sleep.

func getMappings

References: strings.Replace, strings.TrimRight, time.Unix.

func initConfigs

Sets default values if they are not explicitly given, and performs validation.

func initWriter

References: syslog.Dial, syslog.Priority.

func nameMap

func flushRecords

func processData

References: analytics.AnalyticsRecord, elasticv3.NewBulkIndexRequest.

func flushRecords

func processData

References: analytics.AnalyticsRecord, elasticv5.NewBulkIndexRequest.

func flushRecords

func processData

References: analytics.AnalyticsRecord, elasticv6.NewBulkIndexRequest.

func flushRecords

func processData

References: analytics.AnalyticsRecord, elasticv7.NewBulkIndexRequest.

func getAverageRequestTime

getAverageRequestTime returns the average request time of a histogramCounter, dividing the sum of all RequestTimes by the hits.
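The calculation reduces to a guarded division, sketched below; the counter's actual field names are not shown in this index, so plain integers stand in for them.

```go
package main

import "fmt"

// averageRequestTime divides the accumulated request time by the hit count,
// guarding against division by zero.
func averageRequestTime(totalRequestTime, hits uint64) float64 {
	if hits == 0 {
		return 0
	}
	return float64(totalRequestTime) / float64(hits)
}

func main() {
	fmt.Println(averageRequestTime(900, 3)) // 300
}
```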


Tests

Files: 22. Third party imports: 12. Imports from organisation: 3. Tests: 100. Benchmarks: 0.

Constants

Types

Conn

This type doesn't have documentation.

Field name Field type Comment
Store

persistent.PersistentStorage

No comment on field.

Doc

This type doesn't have documentation.

Field name Field type Comment
ID

model.ObjectID

No comment on field.
Foo

string

No comment on field.

MockSQSSendMessageBatchAPI

MockSQSSendMessageBatchAPI is a mock implementation of SQSSendMessageBatchAPI for testing purposes.

Field name Field type Comment
GetQueueUrlFunc

func(ctx context.Context, params *sqs.GetQueueUrlInput, optFns ...func(*sqs.Options)) (*sqs.GetQueueUrlOutput, error)

No comment on field.
SendMessageBatchFunc

func(ctx context.Context, params *sqs.SendMessageBatchInput, optFns ...func(*sqs.Options)) (*sqs.SendMessageBatchOutput, error)

No comment on field.

dummyObject

This type doesn't have documentation.

Field name Field type Comment
tableName

string

No comment on field.

splunkStatus

This type doesn't have documentation.

Field name Field type Comment
Text

string

No comment on field.
Code

int32

No comment on field.
Len

int

No comment on field.

testHandler

This type doesn't have documentation.

Field name Field type Comment
test

*testing.T

No comment on field.
batched

bool

No comment on field.
returnErrors

int

No comment on field.
responses

[]splunkStatus

No comment on field.
reqCount

int

No comment on field.

testListener

This type doesn't have documentation.

Field name Field type Comment
L

net.Listener

No comment on field.

Test functions

TestAggregationTime

References: analytics.AnalyticsRecord, analytics.AnalyticsRecordAggregate, assert.Len, assert.Nil, assert.NotNil, context.Background, context.TODO, model.DBM, testing.T, time.Duration, time.Minute, time.Now.

TestBuildIndexName

References: testing.T.

TestCSVPump_GetName

References: testing.T.

TestCSVPump_Init

References: os.Remove, os.Stat, testing.T.

TestCSVPump_New

References: reflect.DeepEqual, testing.T.

TestCSVPump_WriteData

References: assert.Equal, assert.Nil, context.Background, demo.GenerateRandomAnalyticRecord, fmt.Sprintf, os.RemoveAll, testing.T, time.Now.

TestChunkString

TestConnectAndLogin

References: assert.Equal, assert.Nil, assert.NoError, assert.NotNil, errors.New, gorpc.NewDispatcher, testing.T.

TestConnection

References: assert.Nil, assert.NotNil, context.Background, testing.T.

TestDecodeRequestAndDecodeResponseGraphMongo

References: assert.False, assert.Nil.

TestDecodeRequestAndDecodeResponseMongo

References: assert.False, assert.Nil.

TestDecodeRequestAndDecodeResponseMongoAggregate

References: assert.False, assert.Nil.

TestDecodeRequestAndDecodeResponseMongoSelective

References: assert.False, assert.Nil.

TestDecodeRequestAndDecodeResponseSQL

References: assert.False, assert.Nil.

TestDecodeRequestAndDecodeResponseSQLAggregate

References: assert.False, assert.Nil.

TestDecodingRequest

References: assert.Equal, assert.True.

TestDefaultDriver

References: assert.Equal, assert.Nil, persistent.OfficialMongo.

TestDefaultDriverAggregate

References: assert.Equal, assert.Nil, persistent.OfficialMongo.

TestDefaultDriverSelective

References: assert.Equal, assert.Nil, persistent.OfficialMongo.

TestDispatcherFuncs

References: errors.Is, testing.T.

TestDoAggregatedWritingWithIgnoredAggregations

References: analytics.AgggregateMixedCollectionName, analytics.AnalyticsRecord, analytics.AnalyticsRecordAggregate, assert.Equal, assert.Len, assert.Nil, assert.NotNil, context.Background, context.TODO, model.DBM, testing.T, time.Date, time.Now.

TestEnsureIndexSQL

References: assert.Equal, assert.NotNil, testing.T.

TestEnsureIndexSQLAggregate

References: assert.Equal, assert.NotNil, errors.New, fmt.Sprintf, gorm.Config, gorm.Open, logger.Default, logger.Info, testing.T.

TestEnsureIndexes

References: assert.Equal, assert.Error, assert.Len, assert.Nil, assert.NoError, assert.NotNil, context.Background, fmt.Printf, model.DBObject, testing.T.

TestGetAnalyticsRecordMeasureWithRawResponse

References: analytics.AnalyticsRecord.

TestGetAnalyticsRecordMeasuresAndDimensions

References: analytics.AnalyticsRecord.

TestGetBlurredURL

References: assert.Equal, testing.T.

TestGetMappings

References: analytics.AnalyticsRecord, assert.Equal, strings.Replace, testing.T, time.Now, time.Unix.

TestGetMongoDriverType

References: persistent.Mgo, persistent.OfficialMongo, testing.T.

TestGetPumpByName

References: assert.Equal, assert.Error, assert.Nil, assert.NoError.

TestGraphMongoPump_Init

References: assert.Equal, assert.ErrorContains, assert.NoError, testing.T.

TestGraphMongoPump_WriteData

References: analytics.AnalyticsRecord, analytics.GraphError, analytics.GraphQLStats, analytics.GraphRecord, analytics.OperationQuery, assert.ErrorContains, assert.Nil, assert.NoError, cmp.Diff, cmpopts.IgnoreFields, context.Background, testing.T.

TestGraphSQLAggregatePump_WriteData_Sharded

References: analytics.AggregateGraphSQLTable, analytics.AnalyticsRecord, analytics.GraphQLStats, analytics.OperationQuery, analytics.PredefinedTagGraphAnalytics, analytics.SQLAnalyticsRecordAggregate, assert.False, assert.NoError, assert.NotEmpty, assert.True, base64.StdEncoding, context.Background, require.New, testing.T, time.Date, time.UTC.

TestGraphSQLPump_Init

References: assert.Equal, assert.Error, assert.ErrorContains, assert.False, assert.NoError, assert.True, fmt.Sprintf, os.Setenv, os.Unsetenv, require.New, testing.T.

TestGraphSQLPump_Sharded

References: analytics.AnalyticsRecord, analytics.GraphQLStats, analytics.GraphRecord, analytics.OperationQuery, assert.Equalf, assert.NoError, context.Background, fmt.Sprintf, require.New, time.Date, time.Month, time.UTC.

TestGraphSQLPump_WriteData

References: analytics.AnalyticsRecord, analytics.GraphError, analytics.GraphQLStats, analytics.GraphRecord, analytics.OperationQuery, analytics.OperationSubscription, assert.NoError, cmp.Diff, cmpopts.IgnoreFields, context.Background, fmt.Printf, require.Equalf, require.Error, require.NoError, testing.T.

TestHybridConfigCheckDefaults

References: assert.Equal, testing.T.

TestHybridConfigParsing

References: assert.Equal, assert.NoError, gorpc.NewDispatcher, os.Setenv, os.Unsetenv, testing.T.

TestHybridPumpInit

References: assert.Equal, assert.Nil, errors.New, gorpc.NewDispatcher, testing.T.

TestHybridPumpShutdown

References: assert.False, assert.Nil, assert.NoError, gorpc.NewDispatcher.

TestHybridPumpWriteData

References: analytics.AnalyticsRecord, assert.Equal, context.TODO, errors.New, gorpc.NewDispatcher, testing.T.

TestInitCustomMetricsEnv

References: assert.Equal, assert.Nil, os.Setenv, os.Unsetenv, testing.T.

TestLogzioDecodeOverrideDefaults

References: mapstructure.Decode.

TestLogzioDecodeWithDefaults

References: mapstructure.Decode.

TestLogzioInit

TestMongoAggregatePump_SelfHealing

References: assert.Equal, assert.Nil, assert.NotNil, assert.True, context.Background, context.TODO, demo.GenerateRandomAnalyticRecord, strings.Contains.

TestMongoAggregatePump_ShouldSelfHeal

References: errors.New, logrus.New, logrus.NewEntry, testing.T.

TestMongoAggregatePump_StoreAnalyticsPerMinute

References: assert.True.

TestMongoAggregatePump_divideAggregationTime

References: assert.Equal, logrus.New, logrus.NewEntry, testing.T.

TestMongoPumpOmitIndexCreation

References: analytics.AnalyticsRecord, context.Background, testing.T.

TestMongoPump_AccumulateSet

References: analytics.AnalyticsRecord, analytics.PredefinedTagGraphAnalytics, assert.Equal, testing.T.

TestMongoPump_AccumulateSetIgnoreDocSize

References: analytics.AnalyticsRecord, analytics.PredefinedTagGraphAnalytics, assert.NotEmpty, assert.True, base64.StdEncoding.

TestMongoPump_WriteData

References: analytics.AnalyticsRecord, analytics.City, analytics.GeoData, assert.Equal, assert.Nil, cmp.Diff, cmpopts.IgnoreFields, context.Background, require.NoError, testing.T, time.Date, time.UTC.

TestMongoPump_capCollection_Enabled

TestMongoPump_capCollection_Exists

TestMongoPump_capCollection_Not64arch

References: strconv.IntSize.

TestMongoPump_capCollection_OverrideSize

References: strconv.IntSize.

TestMongoPump_capCollection_SensibleDefaultSize

References: strconv.IntSize.

TestMongoSelectivePump_AccumulateSet

References: analytics.AnalyticsRecord, analytics.SQLTable, assert.Equal, testing.T.

TestPrometheusCounterMetric

References: analytics.AnalyticsRecord, assert.Equal, assert.EqualValues, assert.Nil, prometheus.Unregister, testing.T.

TestPrometheusCreateBasicMetrics

References: assert.Equal, assert.EqualValues, assert.Len.

TestPrometheusDisablingMetrics

References: assert.Contains, assert.NotContains, io.Discard, logrus.New, logrus.NewEntry, prometheus.Unregister.

TestPrometheusEnsureLabels

References: assert.Equal, testing.T.

TestPrometheusGetLabelsValues

References: analytics.AnalyticsRecord, assert.EqualValues, testing.T.

TestPrometheusHistogramMetric

References: analytics.AnalyticsRecord, assert.Equal, assert.EqualValues, assert.Nil, prometheus.Unregister, testing.T.

TestPrometheusInitCustomMetrics

References: assert.Equal, prometheus.Unregister, testing.T.

TestPrometheusInitVec

References: assert.Equal, assert.NotNil, errors.New, prometheus.Unregister, testing.T.

TestResurfaceInit

References: assert.False, assert.NotNil, assert.True.

TestResurfaceSkipWrite

References: analytics.AnalyticsRecord, assert.Empty, assert.Equal, assert.Nil, context.TODO, time.Now.

TestResurfaceWriteChunkedResponse

References: analytics.AnalyticsRecord, assert.Contains, assert.Equal, assert.Nil, assert.NotContains, assert.Regexp, context.TODO, time.Now.

TestResurfaceWriteCustomFields

References: analytics.AnalyticsRecord, assert.Contains, assert.Equal, assert.Nil, assert.NotContains, context.TODO, strings.ToLower, time.Now.

TestResurfaceWriteData

References: analytics.AnalyticsRecord, assert.Contains, assert.Equal, assert.Nil, assert.NotContains, context.TODO, time.Now.

TestRetryAndLog

References: assert.Contains, assert.Equal, assert.Nil, bytes.Buffer, errors.New, logrus.New.

TestSQLAggregateInit

References: analytics.AggregateSQLTable, assert.Equal, assert.NotNil, fmt.Sprintf.

TestSQLAggregateWriteData

References: analytics.AggregateSQLTable, analytics.AnalyticsRecord, analytics.SQLAnalyticsRecordAggregate, assert.Equal, assert.Nil, context.TODO, testing.T, time.Hour, time.Now.

TestSQLAggregateWriteDataValues

References: analytics.AggregateSQLTable, analytics.AnalyticsRecord, analytics.Latency, analytics.SQLAnalyticsRecordAggregate, assert.Equal, assert.InDelta, context.TODO, testing.T, time.Date, time.Local, time.Minute, time.RFC3339.

TestSQLAggregateWriteData_Sharded

References: analytics.AggregateSQLTable, analytics.AnalyticsRecord, analytics.SQLAnalyticsRecordAggregate, assert.Equal, assert.Nil, context.TODO, http.StatusBadRequest, http.StatusInternalServerError, http.StatusNotFound, http.StatusOK, http.StatusUnauthorized, http.StatusUnavailableForLegalReasons, testing.T, time.Now, time.Time.

TestSQLInit

References: analytics.SQLTable, assert.Equal, assert.NotNil.

TestSQLWriteData

References: analytics.AnalyticsRecord, analytics.SQLTable, assert.Equal, assert.Nil, context.TODO, testing.T, time.Now.

TestSQLWriteDataSharded

References: analytics.AnalyticsRecord, analytics.SQLTable, assert.Equal, assert.Nil, context.Background, testing.T, time.Now, time.Time.

TestSQLWriteUptimeData

References: analytics.UptimeReportAggregateSQL, analytics.UptimeReportData, analytics.UptimeSQLTable, assert.Equal, testing.T, time.Hour, time.Now.

TestSQLWriteUptimeDataAggregations

References: analytics.UptimeReportAggregateSQL, analytics.UptimeReportData, analytics.UptimeSQLTable, assert.Equal, assert.Len, time.Now.

TestSQLWriteUptimeDataSharded

References: analytics.UptimeReportAggregateSQL, analytics.UptimeReportData, analytics.UptimeSQLTable, assert.Equal, assert.Nil, testing.T, time.Now, time.Time.

TestSQSPump_Chunks

References: analytics.AnalyticsRecord, assert.Equal, assert.NoError, aws.String, context.Context, context.TODO, sqs.GetQueueUrlInput, sqs.GetQueueUrlOutput, sqs.Options, sqs.SendMessageBatchInput, sqs.SendMessageBatchOutput, time.Now.

TestSQSPump_WriteData

References: analytics.AnalyticsRecord, assert.NoError, aws.String, context.Context, context.TODO, sqs.GetQueueUrlInput, sqs.GetQueueUrlOutput, sqs.Options, sqs.SendMessageBatchInput, sqs.SendMessageBatchOutput, time.Now.

TestSegmentPump

References: context.TODO, time.Millisecond, time.Second, time.Sleep.

TestSetDecodingResponse

References: assert.Equal, assert.True.

TestSplunkInit

TestSqlGraphAggregatePump_Init

References: analytics.AggregateGraphSQLTable, assert.Equal, assert.Error, assert.ErrorContains, assert.False, assert.NoError, assert.True, fmt.Sprintf, os.Setenv, os.Unsetenv, require.New, testing.T.

TestSqlGraphAggregatePump_WriteData

References: analytics.AggregateGraphSQLTable, analytics.AnalyticsRecord, analytics.GraphError, analytics.GraphQLStats, analytics.OperationQuery, analytics.PredefinedTagGraphAnalytics, analytics.SQLAnalyticsRecordAggregate, base64.StdEncoding, context.Background, fmt.Sprintf, require.New, testing.T, time.Date, time.UTC.

TestWriteData

References: analytics.AnalyticsRecord, assert.Equal, assert.Len, assert.NoError, context.Background, testing.T.

TestWriteLicenseExpire

References: analytics.AnalyticsRecord, assert.Equal, assert.Nil, assert.NoError, assert.NotNil, context.Background, gorpc.NewDispatcher.

TestWriteUptimeData

References: analytics.UptimeReportData, assert.Equal, assert.Nil, context.Background, model.DBM, testing.T, time.Now.

TestWriteUptimeDataMongoSelective

References: analytics.UptimeReportData, assert.Equal, assert.Nil, assert.NoError, context.Background, model.DBM, testing.T, time.Now.

Test_SplunkBackoffRetry

References: analytics.AnalyticsRecord, assert.Equal, context.TODO, httptest.NewUnstartedServer, testing.T, time.Now.

Test_SplunkInvalidProxyURL

References: os.Setenv, os.Unsetenv.

Test_SplunkProxyFromEnvironment

References: fmt.Fprintln, http.HandlerFunc, http.Request, http.ResponseWriter, httptest.NewServer, io.ReadAll, os.Setenv, os.Unsetenv.

Test_SplunkWriteData

References: analytics.AnalyticsRecord, assert.Equal, context.TODO, httptest.NewServer, time.Now.

Test_SplunkWriteDataBatch

References: analytics.AnalyticsRecord, assert.Equal, context.TODO, httptest.NewServer, time.Now.

Test_getTLSConfig

References: os.Remove, reflect.DeepEqual, testing.T, tls.Config.