A couple of days ago I was asked: how do we monitor our cluster? Well, there are professional ways and others for the budget-conscious deployment. Here are a few options that came to mind:

– The ping request handler can be used to determine if a node is up and running. This is useful if you want to configure the load balancer to determine which nodes are responding.
– I have also seen environments where a monitoring service issues several predefined queries at a predefined interval and notifies you if no response is received. Something like http://www.site24x7.com/ but behind the firewall. I do not know which monitoring services you might have, if any.
– There are more specialized tools, for example Sematext, although some of them are more Linux friendly, so you will need to look for Windows counterparts if you don't have Linux.
– You can also use clusterstate.json from ZooKeeper (this would be the one from prod: https:///solr/zookeeper?detail=true&path=/clusterstate.json), which will tell you the state of the nodes. You just need to do a bit of parsing, which can be done pretty easily with a bit of Json.Net, which is easy to learn (see the Python sketch below).

And regarding […]
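For illustration, here is that parsing idea as a minimal Python sketch using requests instead of Json.Net. The host is a placeholder, and the znode.data shape of the response is an assumption based on the Solr 4.x ZooKeeper servlet, so treat this as a starting point rather than the definitive implementation:

```python
# Minimal sketch: flag non-active replicas from clusterstate.json.
# SOLR_URL is a placeholder; the znode.data layout is an assumption
# based on the Solr 4.x /solr/zookeeper servlet output.
import json
import requests

SOLR_URL = "http://localhost:8983/solr"

resp = requests.get(
    SOLR_URL + "/zookeeper",
    params={"detail": "true", "path": "/clusterstate.json", "wt": "json"},
)
resp.raise_for_status()

# The znode content arrives as a JSON string that needs a second parse.
cluster_state = json.loads(resp.json()["znode"]["data"])

for collection, coll_info in cluster_state.items():
    for shard, shard_info in coll_info["shards"].items():
        for replica, replica_info in shard_info["replicas"].items():
            if replica_info["state"] != "active":
                print(collection, shard, replica, replica_info["state"])
```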
I had to look for empty values in a mandatory field in Solr today. Wait, what? Shouldn't mandatory values in the index be marked as required="true" when you are defining the field? Well yes, but some people forget to do it, or maybe the spec was not fully completed at the time they worked on the schema, so they did not include it… just in case! (YAGNI definitely comes to mind.) In any case, I had to find which documents did not have the publication date (which sounds like a really, really, really mandatory field to me). So how do you identify them?

Option A: Query *:* and start paginating, taking down notes of which documents do not have the value… OK, this is a totally brute-force approach, but I wouldn't be too surprised to find someone doing it. The things I have seen…

Option B: Query *:* and include only id and publicationdate in your fl. Paginate or add enough rows. Very amateur, but a bit better than before.

Option C: Query *:*, include only the two fields in fl, and sort ascending! Much better, as the documents with empty values will be at the beginning of your results (see the sketch below).

Option […]
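For illustration, here is Option C as a quick Python sketch with requests. The core name collection1 is a placeholder, and it assumes the default Lucene behavior where documents missing the sort field come first in an ascending sort (your schema's sortMissingFirst/sortMissingLast settings can change that):

```python
# Minimal sketch of Option C: only id and publicationdate, sorted ascending
# so documents missing the value should surface first. collection1 is a
# placeholder core name; missing-first ordering depends on your schema.
import requests

resp = requests.get(
    "http://localhost:8983/solr/collection1/select",
    params={
        "q": "*:*",
        "fl": "id,publicationdate",
        "sort": "publicationdate asc",
        "rows": 100,
        "wt": "json",
    },
)
resp.raise_for_status()

for doc in resp.json()["response"]["docs"]:
    # Documents without a publicationdate simply omit the field.
    if "publicationdate" not in doc:
        print(doc["id"])
```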
There are times when you want to optimize your Solr index. But what is optimize, and why do I care? Optimize is similar to defragmenting your hard drive: Solr will create a new index, removing any deleted documents. It is simply housekeeping at its best. I usually kick it off from the Admin UI, going to the overview tab. However, sometimes we might want to do it programmatically, a good example being when you have a spell checker configured to build its dictionary on optimize. The URL to optimize is very simple; here is an example with my localhost, just replace with your Solr host: http://localhost:8983/solr/yourcore/update?stream.body=<optimize/> Notice how the # that the Admin UI adds to its URLs is removed from all REST calls. Happy optimizing!
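If you would rather issue it from code, here is a minimal Python sketch that POSTs the same command; the core name is a placeholder:

```python
# Minimal sketch: trigger an optimize programmatically.
# "yourcore" is a placeholder for your core name.
import requests

resp = requests.post(
    "http://localhost:8983/solr/yourcore/update",
    data="<optimize/>",
    headers={"Content-Type": "text/xml"},
)
resp.raise_for_status()  # optimize can take a while on a large index
```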
Many times have I stopped and restarted Solr to reload a core. Yes, it is kind of a rookie way, as you can always go to the Admin UI, Core Admin, and reload the core. But what if you wanted a really fast way of reloading your core? Just do it via the admin handler: http://{SOLR IP}:{SOLR PORT}/solr/admin/cores?action=RELOAD&core={CORE NAME} You can even add it to your code and make a simple call (see the sketch below), or better yet use SolrNet via the admin functionality documented here: https://github.com/mausch/SolrNet/blob/master/Documentation/Core-admin.md
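For example, a bare-bones reload call in Python could look like this; the host and core name are placeholders for your environment:

```python
# Minimal sketch: reload a core through the CoreAdmin handler.
# SOLR_URL and CORE_NAME are placeholders.
import requests

SOLR_URL = "http://localhost:8983/solr"
CORE_NAME = "collection1"

resp = requests.get(
    SOLR_URL + "/admin/cores",
    params={"action": "RELOAD", "core": CORE_NAME, "wt": "json"},
)
resp.raise_for_status()
print("Reloaded", CORE_NAME)
```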
There are a couple of ways to trigger a commit command in Solr. The easiest way is via a URL: http://localhost:8983/solr/collection1/update?commit=true (Replace localhost:8983 with your Solr URL and collection1 with your collection.) But you can also commit using the Documents option from the Admin UI. Simply navigate to Documents, using this URL: http://localhost:8983/solr/collection1/documents Then select Solr Command (raw XML or JSON), add the command <commit/> and Submit Document! It just works. And if you are using SolrCloud, the command goes to every node.
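And if you want the same thing from code, a tiny Python sketch with requests does it; replace the host and collection as above:

```python
# Minimal sketch: the commit=true URL issued from code.
import requests

resp = requests.get(
    "http://localhost:8983/solr/collection1/update",
    params={"commit": "true"},
)
resp.raise_for_status()
```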
I had an issue raised because of a mismatch between my document results and my facet counts. The issue is basically that there is a field that is not required and, in most cases, the field is added with an empty string – which is OK, as empty has a meaning. However, in a few cases the field is not added at all, and this is not the expected scenario. So I needed to find out why this happened, which means finding the document ids so they can be reviewed during indexing.

Oh well… I was tired, so I ran a *:* query and got all results… too much text.

Query for all: q=*:*

Then I added only the two fields I needed to fl, so that only those fields were shown, plus enough rows to see them all. This was kind of slow and inconvenient.

Query for all with only the required fields and all rows: q=*:*&fl=title,myfacet&rows=1600

So now I remembered: query for missing fields! Just use the - operator on a field name.

Query for documents with missing fields: q=-myfacet:*

Problem solved. Easy as pie!
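As a quick sketch, here is that missing-field query from Python, printing the ids that need review. The core name collection1 is a placeholder; the field name is the one from the post:

```python
# Minimal sketch: list ids of documents missing the myfacet field entirely.
# collection1 is a placeholder core name.
import requests

resp = requests.get(
    "http://localhost:8983/solr/collection1/select",
    params={"q": "-myfacet:*", "fl": "id", "rows": 1600, "wt": "json"},
)
resp.raise_for_status()

for doc in resp.json()["response"]["docs"]:
    print(doc["id"])
```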
I am preparing for a presentation this month on Solr and SolrNet for the Atlanta .NET User Group. Solr 5 is already out, but I will be running my demos using Solr 4.10. Now that I am starting the preparation process, it really feels so good to know that starting a local Solr is SO EASY. Check out the steps:

– Assuming you already downloaded Solr (here if you haven't: http://lucene.apache.org/solr/downloads.html)
– Just extract it into a folder. Mine is called AtlantaSolr
– Make sure you have Java running. If unsure, just type java -version
– Now navigate to your Solr folder, in my case C:\Dropbox\Public Speaking\AtlantaSolrSolrNet (in Solr 4.x, start.jar lives in the example folder)
– Type the magic words java -jar start.jar and let it load
– Voila! Navigate to localhost:8983/solr

It couldn't be easier!