Apologies if this is a really dumb question, but I have been reading so much that I think I am getting myself confused.

I have a Filebeat agent running on a machine and it's reporting back to my ELK stack server. When I had a single pipeline (main) with Logstash on the default port 5044, it worked really well.

I have now added multiple Filebeat .yml files with different configs. When Filebeat starts up, it loads all of the configs:

filebeatdir > conf.d > filebeat_somelog.yml, filebeat_someother.yml and filebeat_file.yml

All is good. I noted in the documentation that you still have to have a base filebeat.yml to allow you to specify the conf.d directory with the additional files. Filebeat starts up, loads the configs, and I can see it parsing the different inputs as specified in the different configs.

On the Logstash side I now have multiple pipelines, basically one for each of those configs. I am probably wrong, but in each of the Logstash input files I specified a different port for each input, thinking this is how that specific beat would talk to Logstash. I am just lost on how to specifically tell Filebeat to go to a specific pipeline (the pipelines are being created correctly in Elasticsearch):

filebeat_somelog.yml > Pipe 1 (with Filter 1)
filebeat_someother.yml > Pipe 2 (with Filter 2)
filebeat_file.yml

What's happening now is that all of the Filebeat output is ending up in the main pipeline, and of course the filter for that pipeline is not filtering correctly. I am sure it's probably obvious, but you know when you read so many different things you get confused? Would really appreciate the help.

Filebeat's purpose is to be used with Elasticsearch, but Beats can publish events to Logstash/Redis/Kafka as well. Normally one uses Logstash to finally index values into Elasticsearch.
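One common way to get this kind of routing is not per-input ports (a single Filebeat instance only has one active output, so the input files in conf.d cannot each point at a different Logstash port), but to tag events in each input config and branch on the tag in Logstash. The sketch below assumes illustrative file paths and tag names, not the poster's actual setup:

```yaml
# conf.d/filebeat_somelog.yml — one of the external inputs files
# (path and tag name are assumed examples)
- type: log
  paths:
    - /var/log/somelog/*.log
  tags: ["somelog"]
```

```text
# In the Logstash pipeline that receives the beats input on 5044,
# branch on the tag Filebeat attached to each event:
filter {
  if "somelog" in [tags] {
    # Filter 1 goes here
  } else if "someother" in [tags] {
    # Filter 2 goes here
  }
}
```

If you want the filters to live in genuinely separate pipelines rather than one conditional block, Logstash also supports pipeline-to-pipeline communication: a single "distributor" pipeline holds the beats input and forwards events to the other pipelines based on the same tags.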