This is the output I get when I try to start logstash. Even with Redis and ElasticSearch shut down, it still says the address is already in use. Any suggestions? As far as I can tell this was fixed in 1.1.8, but I still seem to be hitting the issue. https://logstash.jira.com/browse/LOGSTASH-831
root@logs:~# java -jar logstash-1.1.13-flatjar.jar web --backend elasticsearch://127.0.0.1/
parse
logfile
thread
remaining
PORT SETTINGS 127.0.0.1:9300
INFO 10:52:13,532 [Styx and Stone] {0.20.6}[26710]: initializing ...
DEBUG 10:52:13,544 [Styx and Stone] using home [/root], config [/root/config], data [[/root/data]], logs [/root/logs], work [/root/work], plugins [/root/plugins]
INFO 10:52:13,557 [Styx and Stone] loaded [], sites []
DEBUG 10:52:13,581 using [UnsafeChunkDecoder] decoder
DEBUG 10:52:18,206 [Styx and Stone] creating thread_pool [generic], type [cached], keep_alive [30s]
DEBUG 10:52:18,226 [Styx and Stone] creating thread_pool [index], type [cached], keep_alive [5m]
DEBUG 10:52:18,226 [Styx and Stone] creating thread_pool [bulk], type [cached], keep_alive [5m]
DEBUG 10:52:18,226 [Styx and Stone] creating thread_pool [get], type [cached], keep_alive [5m]
DEBUG 10:52:18,226 [Styx and Stone] creating thread_pool [search], type [cached], keep_alive [5m]
DEBUG 10:52:18,227 [Styx and Stone] creating thread_pool [percolate], type [cached], keep_alive [5m]
DEBUG 10:52:18,227 [Styx and Stone] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
DEBUG 10:52:18,237 [Styx and Stone] creating thread_pool [flush], type [scaling], min [1], size [10], keep_alive [5m]
DEBUG 10:52:18,237 [Styx and Stone] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
DEBUG 10:52:18,237 [Styx and Stone] creating thread_pool [refresh], type [scaling], min [1], size [10], keep_alive [5m]
DEBUG 10:52:18,238 [Styx and Stone] creating thread_pool [cache], type [scaling], min [1], size [4], keep_alive [5m]
DEBUG 10:52:18,238 [Styx and Stone] creating thread_pool [snapshot], type [scaling], min [1], size [5], keep_alive [5m]
DEBUG 10:52:18,258 [Styx and Stone] using worker_count[2], port[9300-9400], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/6/1], receive_predictor[512kb->512kb]
DEBUG 10:52:18,266 [Styx and Stone] using initial hosts [127.0.0.1:9300], with concurrent_connects [10]
DEBUG 10:52:18,290 [Styx and Stone] using ping.timeout [3s], master_election.filter_client [true], master_election.filter_data [false]
DEBUG 10:52:18,290 [Styx and Stone] using minimum_master_nodes [-1]
DEBUG 10:52:18,291 [Styx and Stone] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
DEBUG 10:52:18,294 [Styx and Stone] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
DEBUG 10:52:18,315 [Styx and Stone] enabled [true], last_gc_enabled [false], interval [1s], gc_threshold [{default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, ParNew=GcThreshold{name='ParNew', warnThreshold=1000, infoThreshold=700, debugThreshold=400}, ConcurrentMarkSweep=GcThreshold{name='ConcurrentMarkSweep', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}}]
DEBUG 10:52:18,317 [Styx and Stone] Using probe [org.elasticsearch.monitor.os.JmxOsProbe@e39275b] with refresh_interval [1s]
DEBUG 10:52:18,317 [Styx and Stone] Using probe [org.elasticsearch.monitor.process.JmxProcessProbe@41afc692] with refresh_interval [1s]
DEBUG 10:52:18,320 [Styx and Stone] Using refresh_interval [1s]
DEBUG 10:52:18,321 [Styx and Stone] Using probe [org.elasticsearch.monitor.network.JmxNetworkProbe@3cef237e] with refresh_interval [5s]
DEBUG 10:52:18,323 [Styx and Stone] net_info
host [logs.lbox.com]
eth0 display_name [eth0]
address [/fe80:0:0:0:20c:29ff:fee5:aa11%2] [/10.0.1.18]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo display_name [lo]
address [/0:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [16436] multicast [false] ptp [false] loopback [true] up [true] virtual [false]
DEBUG 10:52:18,324 [Styx and Stone] Using probe [org.elasticsearch.monitor.fs.JmxFsProbe@33f0e611] with refresh_interval [1s]
DEBUG 10:52:18,560 [Styx and Stone] using indices.store.throttle.type [none], with index.store.throttle.max_bytes_per_sec [0b]
DEBUG 10:52:18,566 [Styx and Stone] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
DEBUG 10:52:18,579 [Styx and Stone] using script cache with max_size [500], expire [null]
DEBUG 10:52:18,602 [Styx and Stone] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
DEBUG 10:52:18,603 [Styx and Stone] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
DEBUG 10:52:18,603 [Styx and Stone] using [cluster_concurrent_rebalance] with [2]
DEBUG 10:52:18,606 [Styx and Stone] using initial_shards [quorum], list_timeout [30s]
DEBUG 10:52:18,689 [Styx and Stone] using max_size_per_sec[0b], concurrent_streams [3], file_chunk_size [512kb], translog_size [512kb], translog_ops [1000], and compress [true]
DEBUG 10:52:18,757 [Styx and Stone] using index_buffer_size [48.5mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
DEBUG 10:52:18,758 [Styx and Stone] using [node] weighted filter cache with size [20%], actual_size [97mb], expire [null], clean_interval [1m]
DEBUG 10:52:18,775 [Styx and Stone] using gateway.local.auto_import_dangled [YES], with gateway.local.dangling_timeout [2h]
DEBUG 10:52:18,781 [Styx and Stone] using enabled [false], host [null], port [9700-9800], bulk_actions [1000], bulk_size [5mb], flush_interval [5s], concurrent_requests [4]
INFO 10:52:18,782 [Styx and Stone] {0.20.6}[26710]: initialized
INFO 10:52:18,782 [Styx and Stone] {0.20.6}[26710]: starting ...
DEBUG 10:52:18,823 Using select timeout of 500
DEBUG 10:52:18,824 Epoll-bug workaround enabled = false
DEBUG 10:52:19,336 [Styx and Stone] Bound to address [/0:0:0:0:0:0:0:0:9302]
INFO 10:52:19,338 [Styx and Stone] bound_address {inet[/0:0:0:0:0:0:0:0:9302]}, publish_address {inet[/10.0.1.18:9302]}
DEBUG 10:52:19,379 [Styx and Stone] connected to node [[#zen_unicast_1#][inet[/127.0.0.1:9300]]]
DEBUG 10:52:22,363 [Styx and Stone] disconnected from [[#zen_unicast_1#][inet[/127.0.0.1:9300]]]
DEBUG 10:52:22,364 [Styx and Stone] filtered ping responses: (filter_client[true], filter_data[false])
--> target [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]], master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]]
DEBUG 10:52:22,371 [Styx and Stone] connected to node [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]]
DEBUG 10:52:22,388 [Styx and Stone] [master] starting fault detection against master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]], reason [initial_join]
DEBUG 10:52:22,392 [Styx and Stone] processing [zen-disco-receive(from master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]])]: execute
DEBUG 10:52:22,393 [Styx and Stone] got first state from fresh master [V8QRcyhkSRex16_Lq8r5kA]
DEBUG 10:52:22,393 [Styx and Stone] cluster state updated, version [7], source [zen-disco-receive(from master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]])]
INFO 10:52:22,394 [Styx and Stone] detected_master [Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]], added {[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]],}, reason: zen-disco-receive(from master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]])
INFO 10:52:22,397 [Styx and Stone] elasticsearch/25UYvHAGTNKX3AezvVWEzA
INFO 10:52:22,398 [Styx and Stone] {0.20.6}[26710]: started
DEBUG 10:52:22,404 [Styx and Stone] processing [zen-disco-receive(from master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]])]: done applying updated cluster_state
Exception in thread "LogStash::Runner" org.jruby.exceptions.RaiseException: (Errno::EADDRINUSE) bind - Address already in use
at org.jruby.ext.socket.RubyTCPServer.initialize(org/jruby/ext/socket/RubyTCPServer.java:118)
at org.jruby.RubyIO.new(org/jruby/RubyIO.java:879)
at RUBY.initialize(jar:file:/root/logstash-1.1.13-flatjar.jar!/ftw/server.rb:50)
at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)
at RUBY.initialize(jar:file:/root/logstash-1.1.13-flatjar.jar!/ftw/server.rb:46)
at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)
at RUBY.initialize(jar:file:/root/logstash-1.1.13-flatjar.jar!/ftw/server.rb:34)
at RUBY.run(jar:file:/root/logstash-1.1.13-flatjar.jar!/rack/handler/ftw.rb:94)
at RUBY.run(jar:file:/root/logstash-1.1.13-flatjar.jar!/rack/handler/ftw.rb:66)
at RUBY.run(file:/root/logstash-1.1.13-flatjar.jar!/logstash/web/runner.rb:68)
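A quick first check is to see what is actually holding the port (a rough sketch, assuming the web UI is on its default port 9292; adjust the number if you start it on a different port):
sudo netstat -tlnp | grep 9292    # shows the PID/name of whatever is bound to the web port
sudo lsof -i :9292                # alternative view of the same thing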
I had a similar problem tonight. I found that I had combined configuration files in the conf.d folder while investigating another issue and then forgot about them. When the conf.d/ folder was re-read on restart, it tried to bind the port twice, causing EADDRINUSE.
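If you suspect the same input ended up defined twice, a quick way to check (a sketch, assuming the configs live in /etc/logstash/conf.d as in the ps output further down) is to list the directory and grep for port declarations before restarting:
ls -la /etc/logstash/conf.d/               # every file here gets loaded, including leftovers
grep -rn 'port' /etc/logstash/conf.d/      # duplicate port lines mean a duplicate bind attempt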
I ran into the "Address already in use" error on my second Logstash installation. It happened because I had somehow started multiple Logstash instances. Stopping the Logstash processes manually and starting Logstash again fixed the problem for me.
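One way to find and stop all of them at once (a sketch; pkill matches on the full command line, so double-check what ps shows first):
ps -ef | grep [l]ogstash    # list every running Logstash instance
sudo pkill -f logstash      # stop them all, then start a single instance again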
Try stopping the logstash-web service first.
On Ubuntu: sudo service logstash-web stop
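Afterwards you can confirm nothing is still listening on the web port (a quick check, assuming the default 9292):
sudo netstat -tlnp | grep 9292    # should print nothing once the port is free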
I ran into the same problem: /etc/init.d/logstash could not stop the daemon. I had to kill it manually and restart the services.
root@vikas027:~# ps -aef | grep [l]ogstash
logstash 3752 1 37 02:55 pts/0 00:00:34 /usr/bin/java -Djava.io.tmpdir=/var/lib/logstash -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Xmx500m -Xss2048k -Djffi.boot.library.path=/opt/logstash/vendor/jruby/lib/jni -Djava.io.tmpdir=/var/lib/logstash -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Xbootclasspath/a:/opt/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/opt/logstash/vendor/jruby -Djruby.lib=/opt/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /opt/logstash/lib/logstash/runner.rb agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log
root@vikas027:~# kill -9 3752
root@vikas027:~# /etc/init.d/logstash start
I had the same problem, but for a different reason. I was using emacs to create the logstash config file, and it also created a backup copy when my ssh connection timed out. As a result I was left with two identical .conf files:
Original: 10-logs.conf
Emacs backup: #10-logs.conf#
Logstash loaded both .conf files, tried to bind to the same port twice, and that produced the EADDRINUSE error.
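A quick way to spot and clean up that kind of leftover (a sketch, assuming the configs live in /etc/logstash/conf.d; the backup file name is the one from my case):
ls -la /etc/logstash/conf.d/                  # the emacs copy shows up right next to the original
rm '/etc/logstash/conf.d/#10-logs.conf#'      # remove it, then restart logstash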
Let me share my experience: it turned out that my logstash.conf.bak was also being evaluated and broke everything. Make sure you don't have a leftover file with the same name.
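If in doubt, something like this (same conf.d layout assumed) lists anything in the config directory that is not a plain .conf file and therefore probably should not be there:
find /etc/logstash/conf.d -type f ! -name '*.conf'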