I have a rule of thumb. If I am being asked the same question twice a week, I am writing a blog post.
Today I have been asked the same question twice, so here you go.

I’ve blogged about search before, but one important thing still needs to be covered. Once you are convinced it is the right choice, you may have to deal with its configuration. In an environment with two or more servers, without proper configuration the search index may not be rebuilt after publishing.

The main reason is the way this functionality is architected. In a distributed system with at least one Content Management (CM) and one Content Delivery (CD) instance, index rebuilding works much like caching did in pre-6.3 days: the CD instance maintains its own copy of the search index and knows nothing about what happened on the CM side (publishing).

Your job is to make sure it does.

First of all, as the documentation suggests, check that the application pool account has read/write/modify permissions over the /data/indexes folder, or whatever other location your indexes are stored in (thanks for the hint, Joel). Then make sure you have the following:

  1. The HistoryEngine is enabled on the web database*:

<database id="web">
  <Engines.HistoryEngine.Storage>
    <obj type="Sitecore.Data.$(database).$(database)HistoryStorage, Sitecore.Kernel">
      <param connectionStringName="$(id)"/>
      <EntryLifeTime>30.00:00:00</EntryLifeTime>
    </obj>
  </Engines.HistoryEngine.Storage>
  …
</database>
* This needs to be done on both the CM and CD sides. You may also have a number of “web” databases configured: “stage-web”, “pub-web”, “prod-web”, etc. As the database names may differ from one environment to another, apply this to whichever “web” database you use to deliver content in production.

  2. The update interval setting is not set to “00:00:00”, as this disables the live index rebuild functionality completely:
<setting name="Indexing.UpdateInterval" value="00:05:00"/>
With the default value shown above, the remote server checks every 5 minutes whether anything needs to be added to the index. Take this into account: everything may be working, yet the delay in the rebuild process may cause confusion. Feel free to adjust it to a shorter timeframe; the perfect timing depends on the environment, the frequency of content changes, etc. In my experience, I would not set it to anything less than 30 seconds.

  3. Enable “Indexing.ServerSpecificProperties” in web.config:
<setting name="Indexing.ServerSpecificProperties" value="true"/>
In most cases, you need this value set to “true”. Specifically, it is needed in the following cases, which apply to most installations:

  - more than one CD server in a web farm
  - the CM environment points to the same physical web database as the CD environment

In a clustered CM environment this setting is overridden and set to “true” automatically, due to the EventQueues functionality that has to be enabled in such a case.

If this setting is set to “false” and you have one of the configurations mentioned above, the CD server(s) will never know that they need to update the indexes. After each index update operation, Sitecore writes a timestamp to the Properties table of the currently processed database. This helps the IndexingProvider, which is responsible for the index update process, understand which items to pull out of the history table during the next index update. With “Indexing.ServerSpecificProperties” set to “false”, the timestamp is not unique to the environment, so the CD side may get confused about which items to process from the history store. The instance name can either be set explicitly in web.config or created from a combination of the machine name and the site name, which guarantees the uniqueness of the key within an environment.

  4. Check your index configuration.

4.1 Your search index configuration on the CD side may be referencing the “master” database instead of “web”.

4.2 Check that the root the index is configured to point to actually exists in the “web” database:
<search>
  <configuration>
    <indexes>
      <index id="test" type="Sitecore.Search.Index, Sitecore.Kernel">
        <param desc="name">$(id)</param>
        <param desc="folder">test</param>
        <Analyzer ref="search/analyzer" />
        <locations hint="list:AddCrawler">
          <master type="...">
            <Database>master</Database>
            <Root>/sitecore/content/test</Root>
          </master>
        </locations>
      </index>
    </indexes>
  </configuration>
</search>
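As check 4.1 suggests, on the CD side the crawler location should typically reference the “web” database rather than “master”. A corrected fragment might look like this (the crawler type is elided here just as in the snippet above, and the tag name is an arbitrary key):

```xml
<locations hint="list:AddCrawler">
  <web type="...">
    <Database>web</Database>
    <Root>/sitecore/content/test</Root>
  </web>
</locations>
```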
4.3 If you are leveraging template filters within the configuration, make sure that every tag within the <include /> section is unique, as it is used as a key:
<include hint="list:IncludeTemplate">
  <!-- tag names and template IDs below are illustrative; each tag name must be unique -->
  <home>{your home template ID}</home>
  <article>{your article template ID}</article>
</include>
Otherwise, if you define it like this, only the last item gets into the filter:
<include hint="list:IncludeTemplate">
  <!-- illustrative: duplicate tag names, so only the last entry survives -->
  <template>{your home template ID}</template>
  <template>{your article template ID}</template>
</include>
If you went through all these steps and still can’t get the indexing to work, here is what you can do.

Since there are a few things that can go wrong, we need to separate the “live indexing” functionality, which relies on the history store and update intervals, from the index configuration itself.

To find out whether your index is properly configured at all, follow these steps to run a full index rebuild process on the CD side.

  1. Download this script and copy it to the /layouts folder of your CD instance.

  2. Execute it in the browser by going to http://<your_site>/layouts/RebuildDatabaseCrawlers.aspx

  3. Toggle the index you want to rebuild and hit rebuild.

This will launch a background process so there will be no immediate indication whether the index is rebuilt or not. I suggest looking into the log file to confirm this actually worked.
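A quick way to confirm the background rebuild ran is to grep the log for crawler messages. A minimal sketch, using sample lines of the kind discussed later in this post (the log location under /data/logs is an assumption; adjust to your install):

```shell
# Sample of the CD log lines we are looking for; in practice, grep the most
# recent log.*.txt file under your instance's /data/logs folder instead.
cat <<'EOF' > /tmp/sitecore-sample.log
ManagedPoolThread #12 16:39:58 INFO Starting update of index for the database 'web' (1 pending).
ManagedPoolThread #12 16:39:58 INFO Update of index for the database 'web' done.
EOF
# Count crawler activity entries (case-insensitive); prints 2 here.
grep -ci "update of index" /tmp/sitecore-sample.log
```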

If you do not see your custom index in the list, this means that your index is not properly registered in the system. This is a configuration problem. Review your configuration files and make sure the index is there.

If, after the index is fully rebuilt, you start getting hits and the search index contains the expected number of documents (you can use an index browsing tool to confirm that; see the list at the end of this post), then the index itself is configured properly, and what you need to verify is that the “live indexing” portion works right.

In order to do that, follow these steps:

  1. Login to CM instance.

  2. Modify an item that you know is definitely included in the search index (change the title field, for example).

  3. Save.

  4. See if the item change got reflected within the index on the master/CM side.

  5. Publish.

  6. Verify that the item change got published and cache cleared.

  7. Within SQL Management Studio, query the History table of the web database:

SELECT Category, Action, ItemId, ItemLanguage, ItemVersion, ItemPath, UserName, Created  
FROM [Sitecore_web].[dbo].[History] ORDER BY Created DESC
[![image](http://lh4.ggpht.com/_AIfg6b6IeD0/Tbhce2GYksI/AAAAAAAAa_o/c-9xtmXQQTw/image_thumb%5B3%5D.png?imgmax=800 "image")](http://lh3.ggpht.com/_AIfg6b6IeD0/TbhceNZir3I/AAAAAAAAa_k/2nIal7etNcU/s1600-h/image%5B7%5D.png)

You should be able to see a few entries related to your item change.

  8. Now query the Properties table of the web database:
SELECT [Key], [Value] FROM [Sitecore_web].[dbo].[Properties]
[![image](http://lh3.ggpht.com/_AIfg6b6IeD0/TbhcgqGo5WI/AAAAAAAAa_w/vcxP1dx8Fn0/image_thumb%5B5%5D.png?imgmax=800 "image")](http://lh4.ggpht.com/_AIfg6b6IeD0/TbhcfsXhBdI/AAAAAAAAa_s/J7QpK0hZlj4/s1600-h/image%5B11%5D.png)

You should see two “IndexProvider” related entries for each of the environments.
Note that the actual key names could be different, depending on your configuration.

As previously indicated, these UTC-based timestamps help the IndexingProvider understand which items to pull out of the history table during the next index update.

So naturally, the timestamp for the CD environment should be later than the one for CM.

If you do not see an entry for the CD environment, then something is wrong with the configuration. Double-check the UpdateInterval setting, the history table, and the index configuration.
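The staleness check implied by these timestamps can be sketched as follows. This is a simplified illustration of the concept, not Sitecore’s actual code, and the timestamp format is an assumption:

```python
from datetime import datetime, timezone

def cd_index_is_stale(cm_stamp: str, cd_stamp: str) -> bool:
    """Return True when the CM-side timestamp is newer than the CD-side one,
    i.e. there are history entries the CD index has not yet processed."""
    fmt = "%Y%m%dT%H%M%S"  # assumed format; real property values vary by version
    cm = datetime.strptime(cm_stamp, fmt).replace(tzinfo=timezone.utc)
    cd = datetime.strptime(cd_stamp, fmt).replace(tzinfo=timezone.utc)
    return cm > cd

# CD processed everything up to 16:35, CM wrote entries at 16:39 -> stale
print(cd_index_is_stale("20110427T163958", "20110427T163500"))  # True
```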

  9. Open up the most recent log file on the CD side and look for the following entries:

ManagedPoolThread #12 16:39:58 INFO Starting update of index for the database 'web' (1 pending).
ManagedPoolThread #12 16:39:58 INFO Update of index for the database 'web' done.

These entries indicate that the IndexingProvider kicked in and processed the changed items (1 in this case). The item should now be in the physical index file as a document.

If you do not see these messages, then something is wrong with the actual crawler piece. Look out for any exceptions that appear in this timeframe. The DatabaseCrawler component may not be processing your items properly, so you may have to override it and step into the code to figure out what’s going on.

  10. Finally, take a closer look at the search index files themselves.

The following tools will help you browse the contents of the index and search: