OpenSearch output plugin for Fluentd 🔗︎
Overview 🔗︎
More info at https://github.com/fluent/fluent-plugin-opensearch
Example Deployment: Save all logs to OpenSearch
Example output configurations 🔗︎
```yaml
spec:
  opensearch:
    host: opensearch-cluster.default.svc.cluster.local
    port: 9200
    scheme: https
    ssl_verify: false
    ssl_version: TLSv1_2
    buffer:
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true
```
Configuration 🔗︎
OpenSearch 🔗︎
Send your logs to OpenSearch
host (string, optional) 🔗︎
You can specify OpenSearch host by this parameter.
Default: localhost
port (int, optional) 🔗︎
You can specify OpenSearch port by this parameter.
Default: 9200
user (string, optional) 🔗︎
User for HTTP Basic authentication. This plugin will escape required URL encoded characters within %{} placeholders. e.g. %{demo+}
Default: -
password (*secret.Secret, optional) 🔗︎
Password for HTTP Basic authentication. Secret
Default: -
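For HTTP Basic authentication, combine user and password. A minimal sketch, assuming the credentials live in a Kubernetes Secret (the Secret name opensearch-credentials and its key are hypothetical) referenced through the operator's valueFrom/secretKeyRef syntax:
```yaml
spec:
  opensearch:
    host: opensearch-cluster.default.svc.cluster.local
    port: 9200
    user: fluentd                          # example user name
    password:
      valueFrom:
        secretKeyRef:
          name: opensearch-credentials     # hypothetical Secret name
          key: password                    # hypothetical key inside the Secret
```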
path (string, optional) 🔗︎
Path for HTTP Basic authentication.
Default: -
scheme (string, optional) 🔗︎
Connection scheme
Default: http
hosts (string, optional) 🔗︎
You can specify multiple OpenSearch hosts with the separator ",". If you specify the hosts option, the host and port options are ignored.
Default: -
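For example, a sketch that lists two nodes (the host names are illustrative); once hosts is set, host and port are ignored:
```yaml
spec:
  opensearch:
    hosts: opensearch-node1:9200,opensearch-node2:9200   # illustrative node addresses
    scheme: https
```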
target_index_key (string, optional) 🔗︎
Tell this plugin to find the index name to write to in the record under this key in preference to other mechanisms. The key can be specified as a path to a nested record using dot ('.') as a separator.
Default: -
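For example, if each record already carries its destination index under a nested key (the key name kubernetes.target_index below is hypothetical), you could point the plugin at it:
```yaml
spec:
  opensearch:
    target_index_key: kubernetes.target_index   # hypothetical nested key in the record
    index_name: fallback-index                  # assumed fallback index when the key is missing
```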
time_key_format (string, optional) 🔗︎
The format of the time stamp field (@timestamp or what you specify with time_key). This parameter only has an effect when logstash_format is true as it only affects the name of the index we write to.
Default: -
time_precision (string, optional) 🔗︎
Should the record not include a time_key, define the degree of sub-second time precision to preserve from the time portion of the routed event.
Default: -
include_timestamp (bool, optional) 🔗︎
Adds a @timestamp field to the log, following all settings logstash_format does, except without the restrictions on index_name. This allows one to log to an alias in OpenSearch and utilize the rollover API.
Default: false
logstash_format (bool, optional) 🔗︎
Enable Logstash log format.
Default: false
logstash_prefix (string, optional) 🔗︎
Set the Logstash prefix.
Default: logstash
logstash_prefix_separator (string, optional) 🔗︎
Set the Logstash prefix separator.
Default: -
logstash_dateformat (string, optional) 🔗︎
Set the Logstash date format.
Default: %Y.%m.%d
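A sketch of Logstash-style index naming. With these illustrative settings, events would be written to indices such as mylogs-2024.01.15:
```yaml
spec:
  opensearch:
    logstash_format: true
    logstash_prefix: mylogs             # illustrative prefix
    logstash_prefix_separator: "-"
    logstash_dateformat: "%Y.%m.%d"
```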
utc_index (*bool, optional) 🔗︎
By default, records are inserted into the logstash-YYMMDD index using UTC (Coordinated Universal Time). Set utc_index to false to use local time instead. (default: true)
Default: true
suppress_type_name (*bool, optional) 🔗︎
Suppress type name to avoid warnings in OpenSearch
Default: -
index_name (string, optional) 🔗︎
The index name to write events to
Default: fluentd
id_key (string, optional) 🔗︎
Field on your data to identify the data uniquely
Default: -
write_operation (string, optional) 🔗︎
The write_operation can be any of: (index,create,update,upsert)
Default: index
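For example, a sketch that deduplicates records by a unique field and upserts them; the field name request_id and the index name are hypothetical:
```yaml
spec:
  opensearch:
    index_name: my-app-logs      # illustrative index name
    id_key: request_id           # hypothetical field that uniquely identifies a record
    write_operation: upsert
```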
parent_key (string, optional) 🔗︎
parent_key
Default: -
routing_key (string, optional) 🔗︎
routing_key
Default: -
request_timeout (string, optional) 🔗︎
You can specify HTTP request timeout.
Default: 5s
reload_connections (*bool, optional) 🔗︎
You can tune how the OpenSearch-transport host reloading feature works. (default: true)
Default: true
reload_on_failure (bool, optional) 🔗︎
Indicates that the OpenSearch-transport will try to reload the node addresses if there is a failure while making a request. This can be useful to quickly remove a dead node from the list of addresses.
Default: false
retry_tag (string, optional) 🔗︎
This setting allows custom routing of messages in response to bulk request failures. The default behavior is to emit failed records using the same tag that was provided.
Default: -
resurrect_after (string, optional) 🔗︎
You can set how often dead connections from the OpenSearch-transport's pool will be resurrected.
Default: 60s
time_key (string, optional) 🔗︎
By default, when inserting records in Logstash format, @timestamp is dynamically created with the time at log ingestion. If you’d like to use a custom time, include an @timestamp with your record.
Default: -
time_key_exclude_timestamp (bool, optional) 🔗︎
time_key_exclude_timestamp
Default: false
ssl_verify (*bool, optional) 🔗︎
Verify SSL certificates when connecting to OpenSearch. Set to false to skip SSL verification. (default: true)
Default: true
client_key (*secret.Secret, optional) 🔗︎
Client certificate key
Default: -
client_cert (*secret.Secret, optional) 🔗︎
Client certificate
Default: -
client_key_pass (*secret.Secret, optional) 🔗︎
Client key password
Default: -
ca_file (*secret.Secret, optional) 🔗︎
CA certificate
Default: -
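A minimal TLS sketch, assuming the CA and client certificate material is stored in a Kubernetes Secret named opensearch-tls (hypothetical) and referenced with the operator's valueFrom/secretKeyRef syntax:
```yaml
spec:
  opensearch:
    scheme: https
    ssl_verify: true
    ca_file:
      valueFrom:
        secretKeyRef:
          name: opensearch-tls     # hypothetical Secret holding the certificates
          key: ca.crt
    client_cert:
      valueFrom:
        secretKeyRef:
          name: opensearch-tls
          key: tls.crt
    client_key:
      valueFrom:
        secretKeyRef:
          name: opensearch-tls
          key: tls.key
```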
ssl_version (string, optional) 🔗︎
If you want to configure the SSL/TLS version, you can specify the ssl_version parameter. [SSLv23, TLSv1, TLSv1_1, TLSv1_2]
Default: -
remove_keys_on_update (string, optional) 🔗︎
Remove keys on update will not update the configured keys in OpenSearch when a record is being updated. This setting only has any effect if the write operation is update or upsert.
Default: -
remove_keys_on_update_key (string, optional) 🔗︎
This setting allows remove_keys_on_update to be configured with a key in each record, in much the same way as target_index_key works.
Default: -
flatten_hashes (bool, optional) 🔗︎
https://github.com/fluent/fluent-plugin-opensearch#hash-flattening
Default: -
flatten_hashes_separator (string, optional) 🔗︎
Flatten separator
Default: -
template_name (string, optional) 🔗︎
The name of the template to define. If a template by the name given is already present, it will be left unchanged, unless template_overwrite is set, in which case the template will be updated.
Default: -
template_file (*secret.Secret, optional) 🔗︎
The path to the file containing the template to install. Secret
Default: -
template_overwrite (bool, optional) 🔗︎
Always update the template, even if it already exists.
Default: false
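A sketch of installing an index template from a Secret, assuming template_file accepts the operator's secret reference syntax (the Secret name, key, and template name are hypothetical):
```yaml
spec:
  opensearch:
    template_name: my-logs-template        # hypothetical template name
    template_file:
      valueFrom:
        secretKeyRef:
          name: opensearch-templates       # hypothetical Secret holding the template JSON
          key: my-logs-template.json
    template_overwrite: true               # update the template even if it already exists
```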
customize_template (string, optional) 🔗︎
Specify the string and its value to be replaced in the form of a hash. It can contain multiple key-value pairs that will be replaced in the specified template_file. This setting only creates the template; to add a rollover index, see the rollover_index configuration.
Default: -
index_date_pattern (*string, optional) 🔗︎
Specify this to override the index date pattern for creating a rollover index.
Default: now/d
index_separator (string, optional) 🔗︎
index_separator
Default: -
application_name (*string, optional) 🔗︎
Specify the application name for the rollover index to be created.
Default: default
templates (string, optional) 🔗︎
Specify index templates in form of hash. Can contain multiple templates.
Default: -
max_retry_putting_template (string, optional) 🔗︎
You can specify how many times to retry putting the template.
Default: 10
fail_on_putting_template_retry_exceed (*bool, optional) 🔗︎
Indicates whether to fail when max_retry_putting_template is exceeded. If you have multiple output plugins, you can use this property so that Fluentd does not fail on startup. (default: true)
Default: true
fail_on_detecting_os_version_retry_exceed (*bool, optional) 🔗︎
fail_on_detecting_os_version_retry_exceed (default: true)
Default: true
max_retry_get_os_version (int, optional) 🔗︎
max_retry_get_os_version
Default: 15
include_tag_key (bool, optional) 🔗︎
This will add the Fluentd tag in the JSON record.
Default: false
tag_key (string, optional) 🔗︎
This will add the Fluentd tag in the JSON record.
Default: tag
time_parse_error_tag (string, optional) 🔗︎
With logstash_format true, the OpenSearch plugin parses the timestamp field to generate the index name. If the record has an invalid timestamp value, this plugin emits an error event to the @ERROR label with the tag configured in time_parse_error_tag.
Default: -
reconnect_on_error (bool, optional) 🔗︎
Indicates that the plugin should reset the connection on any error (reconnect on next send). By default it reconnects only on "host unreachable exceptions". We recommend setting this to true in the presence of OpenSearch shield.
Default: false
pipeline (string, optional) 🔗︎
This parameter sets the pipeline ID of your OpenSearch to be added to the request, so you can configure an ingest node pipeline.
Default: -
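For example, a sketch that routes documents through a hypothetical ingest pipeline:
```yaml
spec:
  opensearch:
    pipeline: my-ingest-pipeline    # hypothetical ingest pipeline defined in OpenSearch
```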
with_transporter_log (bool, optional) 🔗︎
This option is for debugging purposes; it enables obtaining transporter-layer logs.
Default: false
emit_error_for_missing_id (bool, optional) 🔗︎
emit_error_for_missing_id
Default: false
sniffer_class_name (string, optional) 🔗︎
The default Sniffer used by the OpenSearch::Transport class works well when Fluentd has a direct connection to all of the OpenSearch servers and can make effective use of the _nodes API. This doesn't work well when Fluentd must connect through a load balancer or proxy. The parameter sniffer_class_name gives you the ability to provide your own Sniffer class to implement whatever connection reload logic you require. In addition, there is a new Fluent::Plugin::OpenSearchSimpleSniffer class which reuses the hosts given in the configuration, which is typically the hostname of the load balancer or proxy. For example, a configuration like this would cause connections to logging-os to reload every 100 operations (see the sketch after reload_after below): https://github.com/fluent/fluent-plugin-opensearch#sniffer-class-name
Default: -
selector_class_name (string, optional) 🔗︎
selector_class_name
Default: -
reload_after (string, optional) 🔗︎
When reload_connections is true, this is the integer number of operations after which the plugin will reload the connections. The default value is 10000.
Default: -
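Putting these parameters together, a hedged sketch of the load-balancer scenario described above: the simple sniffer reuses the configured host and connections are reloaded every 100 operations (the host name logging-os follows the README example):
```yaml
spec:
  opensearch:
    host: logging-os                 # example load balancer / proxy host name
    port: 9200
    reload_connections: true
    reload_after: 100
    sniffer_class_name: Fluent::Plugin::OpenSearchSimpleSniffer
```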
include_index_in_url (bool, optional) 🔗︎
With this option set to true, Fluentd manifests the index name in the request URL (rather than in the request body). You can use this option to enforce URL-based access control.
Default: -
http_backend (string, optional) 🔗︎
With http_backend typhoeus, the OpenSearch plugin uses the typhoeus Faraday HTTP backend. Typhoeus can handle HTTP keepalive.
Default: excon
http_backend_excon_nonblock (*bool, optional) 🔗︎
http_backend_excon_nonblock (default: true)
Default: true
validate_client_version (bool, optional) 🔗︎
When you use mismatched OpenSearch server and client libraries, fluent-plugin-opensearch cannot send data into OpenSearch.
Default: false
prefer_oj_serializer (bool, optional) 🔗︎
With the default behavior, the OpenSearch client uses Yajl as the JSON encoder/decoder. Oj is an alternative high-performance JSON encoder/decoder. When this parameter is set to true, the OpenSearch client uses Oj as the JSON encoder/decoder.
Default: false
unrecoverable_error_types (string, optional) 🔗︎
The default unrecoverable_error_types parameter is set strictly, because rejected_execution_exception is caused by exceeding OpenSearch's thread pool capacity. Advanced users can increase its capacity, but normal users should follow the default behavior.
Default: -
unrecoverable_record_types (string, optional) 🔗︎
unrecoverable_record_types
Default: -
emit_error_label_event (*bool, optional) 🔗︎
emit_error_label_event (default: true)
Default: true
verify_os_version_at_startup (*bool, optional) 🔗︎
verify_os_version_at_startup (default: true)
Default: true
default_opensearch_version (int, optional) 🔗︎
default_opensearch_version
Default: 1
log_os_400_reason (bool, optional) 🔗︎
log_os_400_reason
Default: false
custom_headers (string, optional) 🔗︎
This parameter adds additional headers to the request. Example: {"token":"secret"}
Default: {}
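For example, a sketch that adds the header from the example above to every request (the header name and value are illustrative, and are stored in plain text in the Output resource):
```yaml
spec:
  opensearch:
    custom_headers: '{"token":"secret"}'   # illustrative header; passed as a JSON-style hash
```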
suppress_doc_wrap (bool, optional) 🔗︎
By default, the record body is wrapped in 'doc'. This behavior cannot handle update script requests. You can set this to suppress doc wrapping and leave the record body untouched.
Default: false
ignore_exceptions (string, optional) 🔗︎
A list of exceptions that will be ignored: when such an exception occurs, the chunk will be discarded and the buffer retry mechanism won't be called. It is also possible to specify classes at a higher level in the hierarchy.
Default: -
exception_backup (*bool, optional) 🔗︎
Indicates whether to backup chunk when ignore exception occurs. (default: true)
Default: true
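A hedged sketch combining the two settings above; the exception class name follows the pattern used in the fluent-plugin-opensearch README and is an assumption here:
```yaml
spec:
  opensearch:
    ignore_exceptions: '["OpenSearch::Transport::Transport::ServerError"]'   # assumed exception class name
    exception_backup: false    # do not back up chunks discarded by ignore_exceptions
```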
bulk_message_request_threshold (string, optional) 🔗︎
Configure the bulk_message request splitting threshold size. The default value is 20MB (20 * 1024 * 1024). If you specify this size as a negative number, the bulk_message request splitting feature will be disabled.
Default: 20MB
compression_level (string, optional) 🔗︎
compression_level
Default: -
truncate_caches_interval (string, optional) 🔗︎
truncate_caches_interval
Default: -
use_legacy_template (*bool, optional) 🔗︎
use_legacy_template (default: true)
Default: true
catch_transport_exception_on_retry (*bool, optional) 🔗︎
catch_transport_exception_on_retry (default: true)
Default: true
target_index_affinity (bool, optional) 🔗︎
target_index_affinity
Default: false
buffer (*Buffer, optional) 🔗︎
Default: -
slow_flush_log_threshold (string, optional) 🔗︎
The threshold for the chunk flush performance check. Parameter type is float, not time. Default: 20.0 (seconds). If a chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the metric fluentd_output_status_slow_flush_count.
Default: -