This site is for demonstration purposes and has been scrubbed for proprietary data.

User Help

What's New

Awesome New Features

  • Refiners - One of the first things you'll notice on the results pages is the addition of search refiners on the left side of the page. These refiners are dynamically generated and allow you to filter through the results by selecting known properties of the item you're looking for.
  • People Search - We've worked with the inSite team to bring the best of people search into Enterprise Search. Search for people by name, expertise, skill, or other information from their inSite profile.
  • Secure Search - Previously, users had to select the "secure search" option to view results from any access-controlled resources. Now every search includes secure results automatically, so you will only view results which you are authorized to access.
  • Responsive Design - Modern web sites need to be usable on a variety of devices, so we designed search to work well on a wide range of devices, screen sizes, and orientations. (Internet Explorer 8 and earlier do not support this functionality.)
  • Windows 7 Integration - Enterprise Search can be added to Windows 7 Explorer so you can search our index without opening a web browser.

The Big Picture

Next-Gen Search improves Enterprise Search at Boeing in 3 major ways:

  • Extended Reach - The maximum number of items in the index has been increased from 15 million to approximately 75 million, with plans to continue increasing each year.
  • Enhanced User Experience - Every aspect of the search engine has been re-designed by the Enterprise Search team with user experience as our primary goal.
  • Reduced Costs for Boeing - With such a significant increase in our total index capacity, we are positioned to serve as the primary search engine for the company, with a particular focus on serving groups who might otherwise purchase a search engine for their program or application.

Managing Content

Remove Content from Search

Public Web Site

We encourage content owners to use a robots.txt file at the root of their site to instruct our crawler where it should not crawl. This gives content owners full control over what content appears in the search engine.

If it's not feasible to create a robots.txt file, a simple robots meta tag can be used on individual pages to allow or deny access to the crawler.

In the absence of either type of "robots" instructions, the crawler will follow links throughout the site.
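As a concrete illustration, here is a minimal robots.txt (the path is hypothetical) that keeps crawlers out of one directory while allowing the rest of the site:

```
User-agent: *
Disallow: /internal-drafts/
```

And the equivalent page-level instruction, a robots meta tag placed in an individual page's head:

```
<meta name="robots" content="noindex, nofollow">
```

The robots.txt file controls whole sections of a site from one place; the meta tag is the right tool when you only need to exclude a handful of pages.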

Secure Web Site

When the crawler visits a site which has access controls applied, it will present credentials from our service account. If this service account is not granted access to your site, it will not crawl the site.

In the case of SharePoint 2010 sites, the crawler has been granted access to all sites by default. If you do not wish for your site to be crawled, please contact us.

File Share

Next-Gen Search features extended support for crawling file share content. If your file share was added to search, and you no longer wish for the content to be crawled, simply remove our crawling service account (nw\svcsearch) from the list of authorized users.

In many cases, our crawl account has been granted read-only access at the root level of the Enterprise File Service (EFS), so you may need to contact the EFS team to see how to remove the inherited permissions.

Add Content to Search

Go to to add your content to our search engine. It's as easy as that.

Content not appearing in search results

There are several reasons why a given site may not show up in search results after being registered:

  • If it has been less than 48 hours since the new site was added, it's possible that the new site hasn't been crawled yet.
  • The site is protected by access controls, and our crawler has not been granted access to the site.
  • The site is protected by access controls and the user searching does not have access to it.
  • The site is protected by WSSO, and our crawler has not been configured to access it.

Please contact us if you need help ensuring that your site is ready to be crawled.

How Stuff Works

The Basics


Crawling

The crawler begins by visiting a given page and processing all the content on that page, looking for links to other pages. It then passes the content from the page to the index processor and proceeds to follow the links to other pages. This process repeats continuously, with the crawler discovering new content, updating existing items, and removing deleted files.
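The crawl loop described above can be sketched in a few lines of Python. This is an illustrative model only, not the actual crawler: the "site" here is a hypothetical in-memory map of URL to (content, outgoing links), standing in for real HTTP fetches.

```python
from collections import deque

def crawl(site, start):
    """Breadth-first crawl of an in-memory site map.

    site:  dict mapping url -> (content, [linked urls])
    start: url to begin crawling from
    Returns {url: content} for every page reachable from start.
    """
    seen = {start}
    queue = deque([start])
    discovered = {}
    while queue:
        url = queue.popleft()
        content, links = site[url]
        discovered[url] = content          # hand this page's content to the indexer
        for link in links:                 # follow links to pages not yet visited
            if link in site and link not in seen:
                seen.add(link)
                queue.append(link)
    return discovered

# Hypothetical three-page site used for demonstration.
site = {
    "/":  ("home page", ["/a", "/b"]),
    "/a": ("page a", ["/b"]),
    "/b": ("page b", []),
}
print(crawl(site, "/"))
```

A real crawler adds politeness delays, robots.txt checks, and change detection on top of this basic frontier-queue loop.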


Indexing

The indexer receives content from the crawler, parses it, and adds it to the index. The index is stored in a database that lets searches complete in a matter of seconds, even though millions of documents are being searched.
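The key idea behind that speed is an inverted index: each token maps to the set of documents containing it, so a lookup touches only the matching entries instead of scanning every document. A toy sketch (names and documents are hypothetical):

```python
from collections import defaultdict

def build_index(docs):
    """Build a minimal inverted index.

    docs: dict of doc_id -> text
    Returns dict of token -> set of doc_ids containing that token.
    """
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():   # naive whitespace tokenizer
            index[token].add(doc_id)
    return index

docs = {1: "quarterly budget report", 2: "budget meeting notes"}
index = build_index(docs)
print(sorted(index["budget"]))  # doc ids containing "budget" -> [1, 2]
```

Production indexers also handle stemming, ranking signals, and incremental updates, but the token-to-documents mapping is the core structure.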


Querying

As users perform queries, the keywords from the query are matched against millions of items in the index. Any documents that contain the keywords are retrieved, and security policies for each document are checked before the results are displayed to the user.
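Query processing with security trimming can be sketched as follows. This is a simplified model under assumed data shapes (an inverted index plus a per-document access-control list), not the engine's actual implementation:

```python
def search(index, acl, query, user):
    """Return ids of documents matching all query keywords that `user` may read.

    index: token -> set of doc_ids
    acl:   doc_id -> set of users authorized to read that document
    """
    tokens = query.lower().split()
    if not tokens:
        return set()
    # A document must contain every keyword to match.
    hits = set.intersection(*(index.get(t, set()) for t in tokens))
    # Security trimming: drop anything the user is not authorized to see.
    return {d for d in hits if user in acl.get(d, set())}

index = {"budget": {1, 2}, "report": {1}}
acl = {1: {"alice"}, 2: {"alice", "bob"}}
print(search(index, acl, "budget", "bob"))  # -> {2}
```

Trimming results against the user's permissions is what makes secure search automatic: two users issuing the same query can see different result sets.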

Crawl Schedule

The crawler continuously crawls and indexes content. On average, content is re-indexed every 7 days, though the interval depends on where the content is hosted. Here are some rough figures for various content sources:

  • Web Page: 1-2 days
  • SharePoint Site: 1-2 days
  • File Share: TBD*

If you need your content re-crawled more frequently (e.g., Boeing News Now wants to show up-to-date articles), or to request an urgent re-crawl of your site, please contact us.

*Initial crawling of file shares can take a long time if the average file size or number of files in the share is large. Subsequent crawls are progressively faster.

Removing Dead Links

When the crawler tries to access a file that has been deleted, it receives an error indicating the file no longer exists, and it automatically removes that item from the search engine.
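That cleanup step can be modeled as a simple pruning pass. The sketch below is illustrative only: `is_alive` stands in for whatever liveness check the crawler performs (for example, an HTTP status check), and the index is a plain dict.

```python
def prune_dead_links(index, is_alive):
    """Remove index entries whose URL no longer resolves.

    index:    dict of url -> content
    is_alive: callable url -> bool (e.g. True if the fetch succeeds)
    Returns the number of entries removed.
    """
    dead = [url for url in index if not is_alive(url)]
    for url in dead:
        del index[url]          # drop the dead entry from the index
    return len(dead)

index = {"/a": "page a", "/gone": "deleted page"}
removed = prune_dead_links(index, lambda url: url != "/gone")
print(removed, sorted(index))  # 1 ['/a']
```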

Supported File Types

ascx ASP.NET User Control
asp Active Server Page
aspx Active Server Page Extended
csv Comma Separated Values
doc Microsoft Word Document
docm Word Open XML Macro-Enabled Document
docx Microsoft Word Open XML Document
dot Word Document Template
dotx Word Open XML Document Template
eml E-Mail Message
htm Hypertext Markup Language
html Hypertext Markup Language
jhtml Java HTML Web Page
jsp Java Server Page
mht MHTML Web Archive
msg Outlook Mail Message
mspx Microsoft ASP.NET Web Page
nsf Lotus Notes Database
nws Windows Live Mail Newsgroup
odc Office Data Connection
odp OpenDocument Presentation
ods OpenDocument Spreadsheet
odt OpenDocument Text Document
one OneNote Document
php Hypertext Preprocessor
ppsx PowerPoint Open XML Slide Show
ppt PowerPoint Presentation
pptm PowerPoint Open XML Macro-Enabled Presentation
pptx PowerPoint Open XML Presentation
pub Publisher Document
tif Tagged Image File
tiff Tagged Image File Format
txt Plain Text File
url Internet Shortcut
vdx Visio Drawing XML File
vsd Visio Drawing
vss Visio Stencil
vst Visio Drawing Template
vsx Visio Stencil XML
vtx Visio Template XML
xls Excel Spreadsheet
xlsb Excel Binary Spreadsheet
xlsm Excel Open XML Macro-Enabled Spreadsheet
xlsx Microsoft Excel Open XML Spreadsheet
xml Extensible Markup Language
zip ZIP Compressed Archive

Known Issues

Search results appear with bogus dates

There are many reasons why documents show an odd date in the search results, but the two primary reasons are:

  • No date is present in the document's metadata where one is expected
  • The date meta tag is not formatted correctly (yyyy-mm-dd) according to the Intranet Style Guide

The simple solution to this problem is for the content owners of these documents to ensure that their documents have the correct metadata, according to the Intranet Style Guide.
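For example, a date meta tag in the yyyy-mm-dd format the style guide calls for would look like the following (the tag name and date shown are illustrative; consult the Intranet Style Guide for the exact attribute names your pages should use):

```
<meta name="date" content="2014-03-21">
```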

Still have a question? Check out our help pages on the right side of the footer, or contact us for a quick response.