Friday, March 21, 2008

How to Improve Search Engine Ranking in India?

An article on how to achieve top search engine rankings

Search engine optimization is about building your site in such a way that it ranks higher in the search engines. A website that is optimized for search engines can bring huge benefits to your site and your business. A recent Forrester Research report showed that 80% of web surfers discover the new sites they visit through search engines. When you submit your site to the ODP, include your most important keywords in both the title and the description.

There are several areas of on-page optimization for the Google search engine:

Title: Title tags are a very important guide for all search engines in determining what is in the content of a specific web page. You will find the title tag in the head section of the source code, along with your meta description tag and meta keyword tag. We feel that title tags should span no fewer than six words and no more than twelve, which works out to roughly fifty to eighty characters, including spaces, hyphens, commas, etc.
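To make this concrete, here is a minimal sketch of a head section with a title in that six-to-twelve-word range, together with the meta description and meta keyword tags mentioned above (the wording and tag contents are only hypothetical examples, not a recommended template):

<head>
<!-- Hypothetical example: ten words, roughly 63 characters including spaces -->
<title>Affordable SEO Services in India - Link Building and Web Design</title>
<!-- The meta description and meta keyword tags referred to above -->
<meta name="description" content="SEO company in India offering search engine optimization, link building and website design services.">
<meta name="keywords" content="seo, search engine optimization, link building, web design">
</head>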

Keyword Density: Keyword density is the ratio of the word being searched for, i.e. the keyword, to the total number of words appearing on a given web page. For example, if your keyword appears six times on a 200-word page, its density is 3%. There are a few techniques you can choose from that have been carefully researched by SEO experts.

Link Popularity: Link Popularity refers to the number of hyperlinks pointing to a webpage. The higher a webpage's link popularity rating the better, as most major search engines consider link popularity as an important factor in determining a webpage's relevancy in its search rankings.

Blogging: Weblogs can be used as a way of providing fresh, up-to-date content on an existing website, or they can stand as websites on their own. Most importantly, blogs allow for easy visitor interaction, whereas ordinary websites generally don't.

Sunday, March 16, 2008

Eight Effective Tips For Website Design

To succeed at your online business (whether you are selling your own product/service or selling for other merchants as an affiliate), you need a web design built for just that: a simple, focused site. One that is easy to build, maintenance-free, low cost, credible, and a powerful traffic-builder and customer-converter.

Having the right tool and the right product alone doesn't ensure the success of your website. There are many factors to consider in effective website design. Unfortunately, most of these are overlooked by offline business owners using the internet to promote their business.

Web Design Rule #1 - Build It for Speed
It's a fact of modern life - people are in a hurry. This means that you have between 10 and 30 seconds to capture your potential customer's attention. To minimize your load time, keep graphics small. Compress them where possible. Use flashy technology (JavaScript, Flash, Streaming Audio/Video, animation) sparingly and only if it is important to your presentation.

Web Design Rule #2 - Target your Market
Know who your market is and make certain that your site caters to their needs. It is critical that your site reflect the values of your potential customers. Is your market mostly business professionals? If so, the site must be clean and professional. Is your product aimed mostly at teenagers and young adults? Then your site can be more informal and relaxed. The key here is to know your market and build the site to their preferences.

Web Design Rule #3 - Focus the Site
Make certain your website is focused on its goal: selling your product or service. A site offering many unrelated products is not necessarily unfocused, but this is often the case. If your business does offer many products, dedicate a unique page to each instead of trying to sell them all from one page.

Web Design Rule #4 - Credibility Is Crucial
The most professionally designed site won't sell if your customers don't believe in you. A clear privacy statement is one way to build your credibility. Provide a prominent link to your privacy statement from every page on the site, as well as from any location where you ask your visitors for personal information. Provide legitimate contact information online.
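As a small sketch of how simple this can be (the page names are only placeholders), a footer line like the one below, repeated on every page, gives visitors a constant path to both your privacy statement and your contact details:

<!-- Footer links repeated on every page of the site -->
<p>
<a href="privacy.html">Privacy Statement</a> | <a href="contact.html">Contact Us</a>
</p>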

Web Design Rule #5 - Navigation should be simple
Make site navigation easy and intuitive. Simple and smooth navigation adds to the convenience of your visitors. Add powerful search and catalog features; many visitors do not have the patience to navigate through the whole website to find what they are looking for.

Web Design Rule #6 - Consistency is the key
Make sure the site is consistent in look, feel and design. Nothing is more jarring and disturbing to a customer than feeling as if they have just gone to another site. Keep colors and themes constant throughout the site.

Web Design Rule #7 - Make your site interactive
Make your website interactive. Add feedback forms as well as email forms that allow your prospective customers to ask you any questions they might have pertaining to a product.
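A bare-bones sketch of such a feedback form might look like this (the form action and field names are invented for illustration; your server would need a script at that address to receive the message):

<form action="/contact" method="post">
<p><label for="email">Your email address:</label><br>
<input type="text" id="email" name="email" size="40"></p>
<p><label for="question">Your question about the product:</label><br>
<textarea id="question" name="question" rows="5" cols="40"></textarea></p>
<p><input type="submit" value="Send question"></p>
</form>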
Personalization of your website is another key element that can lead to customer delight and increase your sales. Personalization technology provides the analytic tools to facilitate cross-selling and up-selling while the customer is buying online. It gives you an idea of which products to cross-sell and up-sell. For example, when a person buys a CD player, a disc cleaner can also be offered.

Web Design Rule #8 - Content is King
Good content sells a product. Ask yourself the following questions. Does your copy convey the message you wish to get across to your visitors? Is it compelling? Does it lead your visitor through the sales process? Have others review, critique and edit your copy to ensure it is delivering the intended message. Always double-check your spelling and grammar.

These eight simple rules will go a long way toward an effective website design and, most importantly, toward turning visitors into customers.

Wednesday, February 27, 2008

Google’s Tag To Remove Content Spamming

Content spamming, in its simplest form, is the taking of content from other sites that rank well on the search engines, and then either using it as-is or using utility software like Articlebot to scramble the content to the point that it can't be detected with plagiarism software. In either case, your good, search-engine-friendly content is stolen and used, often as part of a doorway page, to draw the attention of the search engines away from you.

Everyone has seen examples of this: the page that looks promising but contains lists of terms (like term term paper term papers term limits) that link to other similar lists, each carrying Google advertising. Or the site that contains nothing but content licensed from Wikipedia. Or the site that plays well in a search but contains nothing more than SEO gibberish, often ripped off from the site of an expert and minced into word slaw.

These sites are created en masse to provide fertile ground for drawing eyeballs. It seems a waste of time when you receive a penny a view even for the best-paying ads, but when you put up five hundred sites at a time, and you've figured out how to get all of them to show up on the first page or two of a lucrative Google search term, it can be surprisingly profitable.

The losers are the people who click on these pages thinking there is content of worth on these sites, and you, whose place in the top ten is stolen by these spammers. Google is working hard to lock them out, but there is more that you can do to help Google.

Using The Antispam Tag

But there is another loser. One of the strengths of the Internet is that it allows for two-way public communication on a scale never seen before. You post a blog, or set up a wiki; your audience comments on your blog, or adds and changes your wiki.

The problem? While you have complete control over a website and its contents in the normal way of things, sites that allow for user communication remove this complete control from you and give it to your readers. There is no way to prevent readers of an open blog from posting unwanted links, except for manually removing them. Even then, links can be hidden in commas or periods, making it nearly impossible to catch everything.

This leaves you open to the accusation of link spam for links you never put out there to begin with. And while you may police your most recent blog posts, no one polices the ones from several years ago. Yet Google still looks at them and indexes them. By 2002, bloggers everywhere were begging Google for an ignore tag of some sort to prevent its spiders from indexing comment areas.

Not only, they said, would bloggers be grateful; everyone with two-way, uncontrolled communication (wikis, forums, guest books) needed this service from Google. Each of these types of sites has been inundated with spam at some point, forcing some to shut down completely. And Google itself needed it to help curb the rampant spam in the industry.

In 2005, Google finally responded to these concerns. Though their solution is not everything the online community wanted (for instance, it leads to potentially good content being ignored as well as spam), it does at least allow you to section out the parts of your blog that are public. It is the "nofollow" attribute.

"Nofollow" allows you to mark the links in a portion of your web page, whether it is a blog comment area or a section of paid advertising, as links that Google spiders should ignore. The great thing about it is that not only does it keep your rankings from suffering from spam, it also discourages spammers from cluttering your valuable comments section with their junk text.

The most basic part of this attribute involves embedding it into a hyperlink. This allows you to manually flag links, such as those embedded in paid advertising, as links Google spiders should ignore. But what if the content is user-generated? It’s still a problem because you certainly don’t have time to go through and mark all those links up.
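For instance, a paid or user-submitted link marked up as shown below (the URL is a placeholder) tells Google's spiders to ignore it, while the unmarked version above it is followed and credited as normal:

<!-- An ordinary link: followed and counted by Google -->
<a href="http://www.example.com/">Example site</a>

<!-- The same link flagged with the nofollow attribute described above -->
<a href="http://www.example.com/" rel="nofollow">Example site</a>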

Fortunately, blogging systems have been responsive to this new development. Whether you use WordPress or another blogging system, most have either implemented automated "nofollow" links in their comment sections or issued plugins you can install yourself to prevent this sort of spamming.

This does not solve every problem. But it’s a great start. Be certain you know how your user-generated content system provides this service to you. In most cases, a software update will implement this change for you.

Is This Spamming And Will Google Block Me?

There's another problem with the spamming crowd. When you're fighting search engine spam, you start seeing the different forms it can take and, disturbingly, realizing that some of the techniques used on your legitimate site are similar. You have to wonder: will Google block me for my search engine optimization techniques?

This happened recently to BMW's corporate site. Their webmaster, dissatisfied with the dealership's position when web users searched for several terms (such as "new car"), created and posted a gateway page: a page optimized with text that then redirects searchers to an often graphics-heavy page.

Google found it and, rightly or wrongly, promptly dropped their page rank manually to zero. For weeks, searches for their site turned up plenty of spam and dozens of news stories, but to find their actual site, it was necessary to drop to the bottom of the search results, not easy to do in Googleworld.

This is why you really need to understand what Google counts as search engine spam, and adhere to their restrictions even if everyone else doesn't. Never create a gateway page, particularly one stuffed with spammy text. Instead, use legitimate techniques like image alternate text and actual text on your page. Look for ways to get other pages to point to your site: article submission, for instance, or directory submission. And keep your content fresh, always.
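As a quick illustration of the image alternate text technique just mentioned (the file name and wording are hypothetical):

<!-- Descriptive alternate text gives the search engine real text to index for the image -->
<img src="new-car-showroom.jpg" alt="New car showroom with this year's models on display" width="400" height="300">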

While duplicated text is often a sign of serious spammage, the Google engineers realize two things: first, the original text is probably still out there somewhere, and it’s unfair to drop that person’s rankings along with those who stole it from them; and second, certain types of duplicated text, like articles or blog entries, are to be expected.

Their answer to the first issue is to credit the site first catalogued with a particular text as the creator, and to drop sites obviously spammed from that one down a rank. The other issue is addressed by looking at other data around the questionable data; if the entire site appears to be spammed, it, too, is dropped. Provided you are not duplicating text on many websites to fraudulently increase your ranking, you're safe. Ask yourself: are you using the same content on several sites registered to you in order to maximize your chances of being read? If the answer is yes, this is a bad idea and will be classified as spamdexing. If your content would not be useful to the average Internet surfer, it is also likely to be classed as spamdexing.

There is a very thin line between search engine optimization and spam indexing. You should become very familiar with it. Start with understanding hidden/invisible text, keyword stuffing, metatag stuffing, gateway pages, and scraper sites.

Wednesday, February 20, 2008

Black-Hat, Grey-Hat and White-Hat SEO Tactics

Black-Hat SEO Tactics:

Keyword Stuffing
This is probably one of the most commonly abused forms of search engine spam. Essentially, this is when a webmaster or SEO places a large number of instances of the targeted keyword phrase on a page in the hope that the search engine will read this as relevant. To offset the fact that this text generally reads horribly, it will often be placed at the bottom of a page and in a very small font size. An additional tactic often associated with this practice is hidden text, which is discussed below.
Hidden Text
Hidden text is text set to the same color as the background, or very close to it. While the major search engines can easily detect text set to the same color as the background, some webmasters try to get around this by creating an image file the same color as the text and setting that image as the background. While undetectable by the search engines at this time, this is blatant spam, and websites using this tactic are usually quickly reported by competitors and blacklisted.
Cloaking
In short, cloaking is a method of presenting different information to the search engines than a human visitor would see. There are too many methods of cloaking to list here, and some of them are still undetectable by the search engines. That said, which methods still work, and for how long, is rarely set in stone. Like hidden text, when one of your competitors figures out what is being done (and don't think they aren't watching you if you hold one of the top search engine positions), they can and will report your site, and it will get banned.
Doorway Pages
Doorway pages are pages added to a website solely to target a specific keyword phrase or phrases, and they provide little in the way of value to a visitor. Generally the content on these pages provides no real information; the page is only there to promote a phrase in the hope that once a visitor lands there, they will go to the homepage and continue on from there. Often, to save time, these pages are generated by software and added to a site automatically. This is a very dangerous practice. Not only are many of the methods of injecting doorway pages banned by the search engines, but a quick report to the search engine of this practice and your website will simply disappear, along with all the legitimate ranks you have attained with your genuine content pages.
Redirects
Redirecting, when used as a black-hat tactic, is most commonly brought in as a complement to doorway pages. Because doorway pages generally have little or no substantial content, redirects are sometimes applied to automatically move a visitor to a page with actual content, such as the homepage of the site. As quickly as the search engines find ways of detecting such redirects, the spammers uncover ways around detection. That said, the search engines figure them out eventually, and your site will be penalized. That, or you'll be reported by a competitor or a disgruntled searcher.
Duplicate Sites
A throwback tactic that rarely works these days. When affiliate programs became popular, many webmasters would simply create a copy of the site they were promoting, tweak it a bit, and put it online in the hope that it would outrank the site it was promoting and capture its sales. As the search engines would ideally like to see unique content across all of their results, this tactic was quickly banned, and the search engines now have methods for detecting and removing duplicate sites from their index. If the site is changed just enough to avoid automatic detection, with hidden text or the like, you can once again be reported to the search engines and be banned that way.

Interlinking
As incoming links became more important for search engine positioning the practice of building multiple websites and linking them together to build the overall link popularity of them all became a common practice. This tactic is more difficult to detect than others when done “correctly” (we cannot give the method for “correct” interlinking here as it’s still undetectable at the time of this writing and we don’t want to provide a means to spam engines). This tactic is difficult to detect from a user standpoint unless you end up with multiple sites in the top positions on the search engines in which case it is likely that you will be reported.
Grey-Hat SEO Tactics
The following tactics fall in the grey area between legitimate tactics and search engine spam. They include tactics such as cloaking, paid links, duplicate content and a number of others. Unless you are on the correct side of this equation these tactics are not recommended. Remember: even if the search engines cannot detect these tactics when they are used as spam, your competitors will undoubtedly be on the lookout and report your site to the engines in order to eliminate you from the competition.
It is definitely worth noting that, while it may be tempting to enlist grey-hat and black-hat SEO tactics in order to rank well, doing so stands a very good chance of getting your website penalized. There are legitimate methods for ranking a website well on the search engines. It is highly recommended that webmasters and SEOs put in the extra time and effort to properly rank a website, ensuring that the site will not be penalized down the road or even banned from the search engines entirely.
Grey-Hat SEO Tactics:
Cloaking
There are times when cloaking is considered a legitimate tactic by users and search engines alike. Basically, if there is a logical reason why you should be allowed to present different information to the search engines than to the visitor (if you have content behind a "members only" area, for example), you are relatively safe. Even so, this tactic is very risky, and it is recommended that you contact each search engine, present your reasoning, and allow them the opportunity to approve its use.
Arguably, another example of a site legitimately using cloaking is one that is mainly image-based, such as an art site. In this event, provided that the text used to represent the page accurately describes the page and the image(s) on it, this could be considered a legitimate use of cloaking. As cloaking has often been abused, if other methods, such as adding visible text to the page, are possible, they are recommended. If there are no other alternatives, it is recommended that you contact the search engine before adopting this tactic and explain your argument.
There is more information on cloaking on our black-hat SEO tactics page.
Paid Links
The practice of purchasing links on websites solely for the increase in link popularity they can bring has grown steadily over the last year or so, with link auction sites such as LinkAdage making the practice easier. (You can read more about LinkAdage on our SEO resources page.)
When links are purchased as pure advertising the practice is considered legitimate, while the practice of purchasing links only for the increase in link-popularity is considered an abuse and efforts will be made to either discount the links or penalize the site (usually the sellers though not always).
As a general rule, if you are purchasing links you should do so for the traffic that they will yield and consider any increase in link-popularity to be an “added bonus”.
Duplicate Content
Due primarily to the increase in popularity of affiliate programs, duplicate content on the web has become an increasingly significant problem for search engines and search engine users alike, with the same or similar sites dominating the top positions in the search engine results pages.
To address this problem many search engines have added filters that seek out pages with the same or very similar content and eliminate the duplicate. Even at times when the duplicate content is not detected by the search engines it is often reported by competitors and the site’s rankings penalized.
There are times when duplicate content is considered legitimate by both search engines and visitors, and that is on resource sites. A site that consists primarily of an index of articles on a specific subject matter will not be penalized for posting articles that appear elsewhere on the net, though the weight given to them as additional content will likely not be as high as for a page of unique content.
White-Hat SEO Tactics:
Internal Linking

By far one of the easiest ways to stop your website from ranking well on the search engines is to make it difficult for search engines to find their way through it. Many sites use some form of script to enable fancy drop-down navigation, etc. Many of these scripts cannot be crawled by the search engines resulting in unindexed pages.
While many of these effects add visual appeal to a website, if you are using scripts or some other form of navigation that will hinder the spidering of your website, it is important to add text links to the bottom of at least your homepage, linking to all your main internal pages and including a sitemap of your internal pages.
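One simple, hedged sketch of what such footer text links might look like (the page names are only examples):

<!-- Plain text links at the bottom of the homepage that any spider can follow -->
<p>
<a href="index.html">Home</a> |
<a href="services.html">Services</a> |
<a href="articles.html">Articles</a> |
<a href="contact.html">Contact</a> |
<a href="sitemap.html">Site Map</a>
</p>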
Reciprocal Linking
Exchanging links with other webmasters is a good way (not the best, but good) of attaining additional incoming links to your site. While the value of reciprocal links has declined a bit over the past year they certainly still do have their place.
A VERY important note: if you do plan on building reciprocal links, it is important to make sure that you do so intelligently. Random reciprocal link building, in which you exchange links with any and virtually all sites that you can, will not help you over the long run. Link only to sites that are related to yours, whose content your visitors will be interested in, and preferably which contain the keywords that you want to target. Building relevancy through association is never a bad thing, unless you're linking to bad neighborhoods (penalized industries and/or websites).
If you are planning to build reciprocal links, or currently do, you know how time-consuming this process can be. A useful tool that can speed up the process is PRProwler. Essentially, this tool allows you to find related sites with high PageRank, weeding out many of the sites that would simply be a waste of time to even visit. You can read more about PRProwler on our search engine positioning tools page.
Content Creation
Don't confuse "content creation" with doorway pages and the like. When we recommend content creation, we are talking about creating quality, unique content that will be of interest to your visitors and will add value to your site.
The more content-rich your site is the more valuable it will appear to the search engines, your human visitors, and to other webmasters who will be far more likely to link to your website if they find you to be a solid resource on their subject.
Creating good content can be very time-consuming, however it will be well worth the effort in the long run. As an additional bonus, these new pages can be used to target additional keywords related to the topic of the page.
Writing For Others
You know more about your business than those around you, so why not let everyone know? Whether it be in the form of articles, forum posts, or a spotlight piece on someone else's website, creating content that other people will want to read and post on their sites is one of the best ways to build links to your website that don't require a reciprocal link back.

Site Optimization

The manipulation of your content, wording, and site structure for the purpose of attaining high search engine positioning is the backbone of SEO and the search engine positioning industry. Everything from creating solid title and meta tags to tweaking the content to maximize its search engine effectiveness is key to any successful optimization effort.
That said, it is of primary importance that the optimization of a website not detract from the message and quality of the content contained within the site. There is no point in driving traffic to a site that is so poorly worded that it cannot possibly convey the desired message and thus cannot sell. Site optimization must always maintain the salability and solid message of the site while maximizing its exposure on the search engines.

Friday, January 18, 2008

Content Rich Designing

We have all heard the term content-rich, but what does content rich really mean?

Content rich means different things for different individuals, because what one person finds useful, another may not. Content rich is all about providing information that is considered valuable to your target audience. Information that visitors might find useful could consist of product or industry facts, statistics, reviews, tutorials, or educational information related to a specific industry.

How to Design Your Website Content Rich

When designing a content rich website, do not be afraid to think outside of the box. Unique ideas will generally garner more attention than the mundane and more common content concepts. While the unique content that has garnered the most attention over the years, such as the Subservient Chicken and JibJab, may not be appropriate for a business website, there are still lots of "out of the box" things that you can do.

Here are some ideas on how you can build content for your website that will attract website visitors.

Calendar of Events

If your website appeals to a specific audience, manage and maintain a calendar of events; the events should relate to a specific region or topic.

For example: Hawaii Local Events - http://calendar.gohawaii.com/ (regional), or Librarian Events - http://www.infotoday.com/calendar.shtml (topic-specific events).

Sponsorships and Contests

Conducting a contest is a great way to generate interest and incoming links. Everyone wants to win, and in order to garner votes, many competitors will tell their audience about the contest and the voting options.


Sunday, December 30, 2007

Tips for Building SEO Content

While it’s great to have a web site optimized and performing well in the engines, you need to build out content on a consistent basis. Managing growth without upsetting your existing SEO efforts can often be a challenge. With these challenges in mind, here are my top ten tips for building site content while focusing on SEO opportunities.

Tip 1 — Identify New Keyword Markets

If you are pleased with how your existing content is performing, you need to tap popular keyword databases and see what other markets exist. Using tools like the Google AdWords Keyword Tool, Overture, WordTracker, and Keyword Discovery, you can quickly locate new areas relative to your industry or niche that also have a search history associated with them.

Tip 2 — Exploring Analytics

SEO is as much about delivering targeted traffic as it is about rankings, right? If you’re with me on that, start checking your analytics. In particular, explore site paths and conversions relative to referring search phrases. Many times you will find that what you think are your money terms, are actually just pushing in unproductive traffic.
The information available in your analytics package can make or break everything for you. Building new content is always a great idea, but when you go about it blindly, your efforts are often unfocused. If you take the time to identify visitor trends and habits at the keyword level, you can then focus on building new content that puts more visitors to work for your business goals.

Tip 3 — Maintain Your Approach

Have you ever been browsing a company’s web site reading up on various services, when suddenly you’re slapped in the face by content that just doesn’t “fit”?
As more content is written, it becomes critical for the tone and approach of your writing to be consistent. Managing this in groups can be difficult at best, so if your content is scaled in this manner — consider having one consistent editor.

Tip 4 — Write for People, not Engines

I hate that this tip sounds like something out of Google's Webmaster Guidelines... but it's true. Imagine if I had decided to write this article of quick tips for building SEO content in such a manner that you were repeatedly hit by keyword saturation levels that were through the roof.

Friday, December 14, 2007

Process of website indexing by Google & other Search Engines

There is a lot of speculation about how search engines index websites. The topic is shrouded in mystery, since most search engines offer limited information about how they architect the indexing process. Webmasters get some clues by checking their log reports for crawler visits, but they are unaware of how the indexing happens or which pages of their website were really crawled.

While the speculation about the search engine indexing process may continue, here is a theory, based on experience, research and clues, about how they may go about indexing 8 to 10 billion web pages every so often, and why there is a delay before newly added pages show up in their index. This discussion is centered around Google, but we believe that most popular search engines, like Yahoo and MSN, follow a similar pattern.

* Google runs from about 10 Internet Data Centers (IDCs), each having 1000 to 2000 Pentium-3 or Pentium-4 servers running Linux OS.
* Google has over 200 (some think over 1,000) crawlers/bots scanning the web each day. These do not necessarily follow an exclusive pattern, which means different crawlers may visit the same site on the same day, not knowing other crawlers have been there before. This is probably what gives a 'daily visit' record in your traffic log reports, keeping webmasters very happy about the frequent visits.
* Some crawlers' jobs are only to grab new URLs (let's call them 'URL grabbers' for convenience). The URL grabbers grab links and URLs they detect on various websites (including links pointing to your site) and old/new URLs they detect on your site. They also capture the 'date stamp' of files when they visit your website, so that they can identify 'new content' or 'updated content' pages. The URL grabbers respect your robots.txt file and Robots Meta Tags (a sample tag is shown after this list) so that they can include or exclude the URLs you do or do not want indexed. (Note: the same URL with different session IDs is recorded as multiple 'unique' URLs. For this reason, session IDs are best avoided; otherwise they can be mistaken for duplicate content.) The URL grabbers spend very little time and bandwidth on your website, since their job is rather simple. However, just so you know, they need to scan 8 to 10 billion URLs on the web each month. Not a petty job in itself, even for 1,000 crawlers.

* The URL grabbers write the captured URLs, with their date stamps and other status, into a 'Master URL List' so that these can be deep-indexed by other special crawlers.
* The master list is then processed and classified somewhat like this:
a) New URLs detected
b) Old URLs with new date stamp
c) 301 & 302 redirected URLs
d) Old URLs with old date stamp
e) 404 error URLs
f) Other URLs
* The real indexing is done by (what we’re calling) ‘Deep Crawlers’. A deep crawler’s job is to pick up URLs from the master list and deep crawl each URL and capture all the content - text, HTML, images, flash etc.
* Priority is given to 'Old URLs with new date stamp', as they relate to already indexed but updated content. '301 & 302 redirected URLs' come next in priority, followed by 'New URLs detected'. High priority is given to URLs whose links appear on several other sites. These are classified as 'important' URLs. Sites and URLs whose date stamp and content change on a daily or hourly basis are 'stamped' as 'news' sites, which are indexed hourly or even on a minute-by-minute basis.
* Indexing of 'Old URLs with old date stamp' and '404 error URLs' is altogether ignored. There is no point wasting resources indexing 'Old URLs with old date stamp', since the search engine already has that content indexed and it has not been updated. '404 error URLs' are URLs collected from various sites that turn out to be broken links or error pages. These URLs do not show any content.
* The ‘Other URLs’ may contain URLs which are dynamic URLs, have session IDs, PDF documents, Word documents, PowerPoint presentations, Multimedia files etc. Google needs to further process these and assess which ones are worth indexing and to what depth. It perhaps allocates indexing task of these to ‘Special Crawlers’.
* When Google 'schedules' the 'Deep Crawlers' to index 'New URLs' and '301 & 302 redirected URLs', just the URLs (not the descriptions) start appearing in the search engine result pages when you run the search "site:www.domain.com" in Google.

* Since Deep Crawlers need to crawl billions of web pages each month, they can take as long as 4 to 8 weeks to index even updated content. New URLs may take longer to index.
* Once the Deep Crawlers index the content, it goes into their originating IDCs. Content is then processed, sorted and replicated (synchronized) to the rest of the IDCs. A few years back, when the data size was manageable, this data synchronization used to happen once a month, lasting for 5 days, and was called the 'Google Dance'. Nowadays, the data synchronization happens constantly, which some people call 'Everflux'.
* When you hit www.google.com from your browser, you can land at any of their 10 IDCs depending upon their speed and availability. Since the data at any given time is slightly different at each IDC, you may get different results at different times or on repeated searches of the same term (Google Dance).
* The bottom line is that one needs to wait as long as 8 to 12 weeks to see full indexing in Google. Consider this the 'cooking time' in Google's kitchen. Unless you can increase the 'importance' of your web pages by getting several incoming links from good sites, there is no way to speed up the indexing process, short of personally knowing Sergey Brin & Larry Page and having significant influence over them.
* Dynamic URLs may take longer to index (sometimes they do not get indexed at all), since even a small amount of data can create unlimited URLs, which can clutter the Google index with duplicate content.
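As promised above, here is what a Robots Meta Tag looks like. Placed in the head section of a page, it tells crawlers such as the URL grabbers whether to index that page and whether to follow its links; the two tags below show the standard values (a generic illustration, not a Google-specific feature):

<!-- Allow the page to be indexed and its links followed (this is also the default if no tag is present) -->
<meta name="robots" content="index, follow">

<!-- Keep the page out of the index and tell crawlers not to follow its links -->
<meta name="robots" content="noindex, nofollow">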

Summary & Advice:

1. Ensure that you have cleared all roadblocks for crawlers and they can freely visit your site and capture all URLs. Help crawlers by creating good interlinking and sitemaps on your website.
2. Get lots of good incoming links to your pages from other websites to improve the ‘importance’ of your web pages. There is no special need to submit your website to search engines. Links to your website on other websites are sufficient.
3. Patiently wait for 4 to 12 weeks for the indexing to happen.

Disclaimer: The actual functioning and exact architecture of the search engines may vary but in essence, this is what we believe they do.