Install logstash-forwarder
yum install https://download.elastic.co/logstash-forwarder/binaries/logstash-forwarder-0.4.0-1.x86_64.rpm
Add the config file at /etc/logstash-forwarder.conf:
{ "network": { "servers": [ "localhost:5000" ], "timeout": 15, "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt" }, "files": [ { "paths": [ "access_log" ], "fields": { "type": "access" } } ] }
You can access the "type" field above in Logstash and use it in the filter or output block.
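For example, here is a minimal sketch of the Logstash side that receives events from the forwarder on port 5000 and branches on that type (the grok pattern is only an illustration; the certificate paths match the ones generated below):

# Sketch of a Logstash config; assumes the lumberjack input plugin is installed
input {
  lumberjack {
    port => 5000
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  # "type" comes from the "fields" section of logstash-forwarder.conf
  if [type] == "access" {
    grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
  }
}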
To generate the SSL certificate:
# Generate SSL certificate
sudo mkdir -p /etc/pki/tls/certs
sudo mkdir /etc/pki/tls/private
cd /etc/pki/tls
sudo openssl req -subj '/CN=localhost/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
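Here both the forwarder and Logstash run on localhost. If the forwarder runs on a different host, copy the certificate there as well (the user and host below are placeholders):

# Copy the certificate to a remote forwarder host (hypothetical host name)
scp /etc/pki/tls/certs/logstash-forwarder.crt user@forwarder-host:/etc/pki/tls/certs/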
Start logstash-forwarder
sudo service logstash-forwarder start
Error logs can be found here:
tail -f /var/log/logstash-forwarder/logstash-forwarder.err
Categories: Elasticsearch
The latest elasticsearch has a new feature: alerting using Watcher. This feature is very helpful for triggering an alert on a matching search query.
You need to install the plugin:
bin/plugin -i elasticsearch/license/latest
bin/plugin -i elasticsearch/watcher/latest
# Restart elasticsearch
# Verify installation using
curl -XGET 'http://localhost:9200/_watcher/stats?pretty'
After installation we have to create a watch. This one checks cluster health every 10 seconds. (Original ref: https://www.elastic.co/downloads/watcher)
curl -XPUT 'http://localhost:9200/_watcher/watch/cluster_health_watch' -d '{
  "trigger" : {
    "schedule" : { "interval" : "10s" }
  },
  "input" : {
    "http" : {
      "request" : {
        "host" : "localhost",
        "port" : 9200,
        "path" : "/_cluster/health"
      }
    }
  },
  "condition" : {
    "compare" : {
      "ctx.payload.status" : { "eq" : "red" }
    }
  },
  "actions" : {
    "send_email" : {
      "email" : {
        "to" : "appasaheb.sawant@gmail.com",
        "subject" : "Cluster Status Warning",
        "body" : "Cluster status is RED"
      }
    }
  }
}'
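When you no longer need the alert, the watch can be removed again with the standard Watcher API:

curl -XDELETE 'http://localhost:9200/_watcher/watch/cluster_health_watch'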
If we want to alert on a matching search query, we can do it like this:
curl -XPUT 'http://localhost:9200/_watcher/watch/log_error_watch' -d '{
  "trigger" : {
    "schedule" : { "interval" : "10s" }
  },
  "input" : {
    "search" : {
      "request" : {
        "indices" : [ "logs" ],
        "body" : {
          "query" : {
            "match" : { "message": "error" }
          }
        }
      }
    }
  }
}'
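To test this watch, index a log document that matches the query. The index name "logs" and the "message" field follow the watch above; the doc type "event" and field values are just examples:

curl -XPOST 'http://localhost:9200/logs/event' -d '{
  "timestamp" : "2015-08-11T12:00:00",
  "message" : "error: something went wrong"
}'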
You can add email settings in the elasticsearch config:
watcher.actions.email.service.account:
  work:
    profile: gmail
    email_defaults:
      from: 'appasaheb.sawant@gmail.com'
      bcc: appasaheb.sawant@gmail.com
    smtp:
      auth: true
      starttls.enable: true
      host: smtp.gmail.com
      port: 587
      user: gmail username
      password: gmail password
Categories: Other
If we have documents with city information, we can implement auto-complete search in elasticsearch using an nGram filter.
Add the index mapping as follows:
curl -X PUT "http://localhost:9200/cities" -d '{
  "mappings" : {
    "city" : {
      "properties" : {
        "name" : {
          "type" : "string",
          "search_analyzer" : "apps_search",
          "index_analyzer" : "apps_index"
        },
        "state": { "type" : "string" },
        "pin": { "type" : "string" },
        "location": { "type": "geo_point" }
      }
    }
  },
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "apps_search" : {
          "tokenizer" : "keyword",
          "filter" : ["lowercase"]
        },
        "apps_index" : {
          "tokenizer" : "keyword",
          "filter" : ["lowercase", "substring"]
        }
      },
      "filter" : {
        "substring" : {
          "type" : "nGram",
          "min_gram" : 1,
          "max_gram" : 20
        }
      }
    }
  }
}'
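To try it out, index a few sample city documents first (the city values below are illustrative):

# Sample documents matching the mapping above
curl -XPOST "http://localhost:9200/cities/city" -d '{ "name": "Pune", "state": "MH", "pin": "411001", "location": { "lat": 18.52, "lon": 73.85 } }'
curl -XPOST "http://localhost:9200/cities/city" -d '{ "name": "Ahmednagar", "state": "MH", "pin": "414001", "location": { "lat": 19.09, "lon": 74.74 } }'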
You can then search like the following:
{ "size" : 100, "query" : { "match" : { "name" : "a" } } }
You will get the documents whose name starts with "A" (note that nGram, unlike edge nGram, also matches names containing "a" elsewhere).
{ "size" : 100, "query" : { "match" : { "name" : "pun" } } }
You will get the documents whose name starts with "pun".
Categories: Elasticsearch
Following are simple steps to set up and configure elasticsearch on Windows.
ESInstall.BAT
@echo off
echo Welcome! Installation and Configuration of elasticsearch has been started
echo Setting up java environment.
SET "JAVA_HOME=c:\Program Files\Java\jdk1.7.0_51"
echo Set memory.
SET ES_MIN_MEM=1g
SET ES_MAX_MEM=1g
echo "Copy Elasticsearch Config file, if you have customized"
copy elasticsearch.yml elasticsearch-1.0.0\config\
echo "Start Elasticsearch"
start /b elasticsearch-1.0.0\bin\elasticsearch
echo "Install Head Plugin"
elasticsearch-1.0.0\bin\plugin --install mobz/elasticsearch-head
echo "Done!"
Run the batch file on the command prompt:
ESInstall
It will set up elasticsearch with the head plugin.
Note: Please check your Java installation path and update it in the batch file.
Categories: Elasticsearch
Following are some points that could help improve elasticsearch performance and let it scale better. Here is an ideal cluster infrastructure based on my research.
Single point of URL for search and index:- It is good to keep a single URL for searching and indexing. It can be done with a load balancer. Behind the load balancer, configure master and non-data nodes as the backend. The reason for not putting data nodes behind the load balancer is to avoid unwanted HTTP requests on them; it keeps the data nodes away from serving the HTTP requests coming in for searching and indexing.
This way, a data node can focus on searching its shards or creating the index for each request.
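As an example, here is a minimal sketch of such a load balancer using nginx (nginx itself and the backend host names are assumptions; any load balancer will do):

# nginx.conf fragment: round-robin over the non-data nodes (hypothetical host names)
upstream es_backend {
    server es-client-1:9200;
    server es-client-2:9200;
}
server {
    listen 9200;
    location / {
        proxy_pass http://es_backend;
    }
}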
Master Node:- For stability and best performance of an elasticsearch cluster, and per elasticsearch's own recommendation, we should keep a separate node as the master node. It can be done by setting "node.data: false" and "node.master: true" in the config file.
All other nodes should look for this master node by setting up the config properties sketched below.
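A sketch of those properties, based on the settings shown later in this post ("master node" stands for your master's host name):

# elasticsearch.yml on all non-master nodes
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["master node"]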
Keep a couple of non-master and non-data nodes for serving HTTP requests. They will also help if the master node goes down.
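A minimal sketch of such a node's config:

# elasticsearch.yml on a client (non-master, non-data) node
node.master: false
node.data: false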
Data Nodes:- Data nodes are meant for serving search requests from their shards, sending results back, and creating new index data on the cluster. So compared to the master node and the non-data nodes, they require more RAM and processing power.
As a data node holds data, we should size its disk as per our data volume requirement.
A very important question is: how many data nodes should I keep?
Answer:- Currently I can say: if you have less than 500GB of data volume, 5 shards per index, and a good amount of search and indexing requests, you need 5 data nodes to balance the load. If you keep 3 or 4 data nodes, then 2 or 1 of them respectively would hold 2 shards each, so the shard distribution would not be proportionate. It leads to …
If the data size is less than 150GB, it will not matter much. (Note: I have tested this on an 8-core CentOS machine with 16 GB of RAM; I will come up with actual numbers and stats in Part II.)
Memory management:- Currently keep it simple: 50% of RAM for the JVM heap, leaving the other 50% for the OS.
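For example, on the 16 GB machine mentioned above, the heap can be set via the ES_HEAP_SIZE environment variable before starting elasticsearch:

# 50% of 16 GB RAM goes to the JVM heap; the rest is left to the OS file-system cache
export ES_HEAP_SIZE=8g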
Elasticsearch config properties :-
Monitoring tools :-
Following are good monitoring tools that I liked; they are very helpful.
1) Elastic-hq – http://www.elastichq.org/
2) Elastic head – https://github.com/mobz/elasticsearch-head
3) https://github.com/karmi/elasticsearch-paramedic
4) bigdesk
Coming soon … Part – II
Categories: Elasticsearch, Website Performance
While indexing documents in elasticsearch I was having a duplicate record issue, mainly due to the schema-free nature of elasticsearch. By default each indexed document is associated with an id and a type. If we have not specified an _id value, elasticsearch auto-generates a unique id.
e.g.
curl -XPOST 'http://localhost:9200/database/user' -d '{
  "user_login": "appa",
  "name": "Appasaheb Sawant",
  "postDate": "2013-03-11",
  "body": "I am a Sr. Software Engineer.",
  "email": "appasaheb.sawant@gmail.com"
}'
curl -XPOST 'http://localhost:9200/database/user' -d '{
  "user_login": "appa",
  "name": "Appasaheb Sawant",
  "postDate": "2013-03-11",
  "body": "I am a Sr. Software Engineer.",
  "email": "appasaheb.sawant@gmail.com"
}'
This will insert two records, which in this case are duplicates. But we can easily solve it: we just need to specify a unique _id.
e.g.
curl -XPOST 'http://localhost:9200/database/user/1' -d '{
  "user_login": "appa",
  "name": "Appasaheb Sawant",
  "postDate": "2013-03-11",
  "body": "I am a Sr. Software Engineer.",
  "email": "appasaheb.sawant@gmail.com"
}'
curl -XPOST 'http://localhost:9200/database/user/1' -d '{
  "user_login": "appa",
  "name": "Appasaheb Sawant",
  "postDate": "2013-03-11",
  "body": "I am technical lead.",
  "email": "appasaheb.sawant@gmail.com"
}'
The above commands will index only one document; the second command updates the first one. The record will show the body as "I am technical lead."
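You can verify this with a simple get:

curl -XGET 'http://localhost:9200/database/user/1?pretty'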
Categories: Elasticsearch, Uncategorized
I have faced a major issue in elasticsearch: in my cluster, after some time, elasticsearch automatically elected more than one master node. Due to that, it was showing two sets of nodes in one cluster. It affects the following …
You might face the same issue; it is normal, and may be due to …
It is very easy to fix this issue. A master node maintains the cluster and routes indexing or search requests to data nodes; a data node stores data, and when it receives a request, it searches its shards or creates an index. If we ask one node to do both jobs, it becomes difficult to manage: the node has to maintain the cluster as well as search and index data, which causes performance issues. The best solution is to keep them separate. In your cluster, keep only one master node and configure all nodes to look to that same master for cluster state. For failover you might keep one extra master node for disaster recovery.
Following are the settings to do that.
Master node
node.master: true
node.data: false
transport.tcp.compress: true
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.timeout: 15s
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["master node"]
Data Node
node.master: false
node.data: true
transport.tcp.compress: true
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.timeout: 15s
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["master node"]
Change the config on all nodes likewise and restart them.
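After the restart you can confirm that only one master is elected (the cat API is available since elasticsearch 1.0; the elected master is marked with *):

curl -XGET 'http://localhost:9200/_cat/nodes?v'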
Categories: Elasticsearch, Website Peformance
Debugging a web application is a very tedious job, especially when the website is full of processes and actions. Checking apache logs and finding out request details is time consuming. For this problem I am recommending one solution.
We can use …
First, create the index with a mapping:
curl -XPOST localhost:9200/apache -d '{
  "mappings" : {
    "access" : {
      "properties" : {
        "host": { "index": "analyzed", "store": "yes", "type": "ip" },
        "logname": { "index": "analyzed", "store": "yes", "type": "string" },
        "user": { "index": "analyzed", "store": "yes", "type": "string" },
        "time": { "index": "analyzed", "store": "yes", "type": "date", "format" : "yyyy:MM:dd HH:mm:ss" },
        "method": { "index": "not_analyzed", "store": "yes", "type": "string" },
        "url": { "index": "not_analyzed", "store": "yes", "type": "string" },
        "protocol": { "index": "not_analyzed", "store": "yes", "type": "string" },
        "status": { "index": "analyzed", "store": "yes", "type": "string" },
        "sentbytes": { "index": "not_analyzed", "store": "yes", "type": "string" },
        "referrer": { "index": "not_analyzed", "store": "yes", "type": "string" },
        "useragent": { "index": "analyzed", "store": "yes", "type": "string" }
      }
    }
  }
}'
Shell script to parse apache logs and put them into elasticsearch:
#!/bin/bash
ElasticUrl="http://localhost:9200"
Index="apache"
Type="access"
LogFile=/var/log/httpd/access_log

# Follow the access log and ship each line as a JSON document
tail -f $LogFile | while read myline; do
  JSON=$(php shipper.php "$myline")
  curl -i \
    -H "Accept: application/json" \
    -H "Content-Type: application/json" \
    -X POST --data "$JSON" "$ElasticUrl/$Index/$Type"
done
PHP script to convert apache log to json format
<?php
require_once("apache-log-parser/src/Kassner/ApacheLogParser/Factory.php");
require_once("apache-log-parser/src/Kassner/ApacheLogParser/FormatException.php");
require_once("apache-log-parser/src/Kassner/ApacheLogParser/ApacheLogParser.php");

use Kassner\ApacheLogParser\ApacheLogParser;

$mapping = array();
if (isset($argv[1])) {
    // Parse one line of the apache combined log format
    $parser = new ApacheLogParser("%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"");
    $logLine = $argv[1];
    $entry = $parser->parse($logLine);
    $method = '';
    $url = '';
    $protocol = '';
    // Split the request line ("GET /path HTTP/1.1") into its three parts
    if (isset($entry->request)) {
        $arrReq = explode(" ", $entry->request);
        if (count($arrReq) == 3) {
            $method = $arrReq[0];
            $url = $arrReq[1];
            $protocol = $arrReq[2];
        }
    }
    $entry->stamp = @date("Y:m:d h:i:s", $entry->stamp);
    // Field names match the index mapping created above
    $mapping = array(
        'host' => $entry->host,
        'logname' => $entry->logname,
        'user' => $entry->user,
        'time' => $entry->stamp,
        'method' => $method,
        'url' => $url,
        'protocol' => $protocol,
        'status' => $entry->status,
        'sentbytes' => $entry->sentBytes,
        'referrer' => $entry->HeaderReferer,
        'useragent' => $entry->HeaderUserAgent
    );
    echo json_encode($mapping);
}
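A quick usage example; the log line below is an illustration of the combined format the parser expects:

php shipper.php '127.0.0.1 - - [11/Mar/2013:10:00:00 +0530] "GET /index.php HTTP/1.1" 200 1234 "-" "Mozilla/5.0"'
# prints one JSON document, e.g. {"host":"127.0.0.1","logname":"-","method":"GET","url":"\/index.php",...}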
Now you just have to keep the shell script running:
sh <shell script name> &
Categories: Elasticsearch, Linux, Website Performance
There are various ways to import MySQL data into elasticsearch. A couple of them follow.
Read records one by one and run curl, e.g. curl -XPUT 'http://localhost:9200/ …
By using JDBC and elasticsearch river.
Using JDBC and Elasticsearch River
Install JDBC river plugin
./bin/plugin -url http://bit.ly/10FJhEd -install river-jdbc
Download MySQL JDBC driver
# Download the driver from http://dev.mysql.com/downloads/mirror.php?id=412177
unzip mysql-connector-java-5.1.21-bin.zip
cp mysql-connector-java-5.1.21-bin.jar $ES_HOME/plugins/river-jdbc/
./bin/elasticsearch -f
Import table from mysql
curl -XPUT 'localhost:9200/_river/jdbc/_meta' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "driver" : "com.mysql.jdbc.Driver",
    "url" : "jdbc:mysql://localhost:3306/test",
    "user" : "",
    "password" : "",
    "sql" : "select * from test;"
  },
  "index" : {
    "index" : "jdbc",
    "type" : "jdbc"
  }
}'
Select table data from elasticsearch
curl -XGET 'http://localhost:9200/jdbc/_search?q=*'
Display indexed data on browser interface.
bin/plugin -install OlegKunitsyn/elasticsearch-browser
Open http://localhost:9200/_plugin/browser/?database=[index]&table=[type]
Ref. From –
https://github.com/jprante/elasticsearch-river-jdbc/wiki/Quickstart
https://github.com/OlegKunitsyn/elasticsearch-browser/wiki
Please note: the latest elasticsearch has deprecated rivers – https://www.elastic.co/blog/deprecating-rivers
Categories: Elasticsearch, Linux, Website Performance
Creating index
It is very easy to create an index in elasticsearch. There are various PHP APIs/libraries available; a couple of the most used are:
https://github.com/nervetattoo/elasticsearch
https://github.com/ruflin/Elastica
curl -XPUT 'http://localhost:9200/database/user/1' -d '{
  "user_login": "appa",
  "name": "Appasaheb Sawant",
  "postDate": "2013-03-11",
  "body": "I am a Sr. Software Engineer.",
  "email": "appasaheb.sawant@gmail.com"
}'
//Command Output
{"ok":true,"_index":"database","_type":"user","_id":"1","_version":2}

curl -XPUT 'http://localhost:9200/database/user/2' -d '{
  "user_login": "sarita",
  "name": "Sarita Sawant",
  "postDate": "2013-03-25",
  "body": "I am a Payroll Assistant",
  "email": "test@gmail.com"
}'
//Command Output
{"ok":true,"_index":"database","_type":"user","_id":"2","_version":1}

//Search Index
http://127.0.0.1:9200/database/user/1
http://127.0.0.1:9200/database/user/2
http://127.0.0.1:9200/database/user/_search?q=body:software
http://127.0.0.1:9200/database/user/_search?q=-body:engineer
http://127.0.0.1:9200/database/user/_search?q=user_login:sarita&body:software&pretty=true
Categories: Elasticsearch, Linux, Website Performance