== History ==
[[Webmaster]]s and content providers began optimizing websites for search engines in the mid-1990s, as the first search engines were cataloging the early [[World Wide Web|Web]]. Initially, webmasters submitted the address of a page, or [[Uniform Resource Locator|URL]], to the various search engines, which would send a [[web crawler]] to ''crawl'' that page, extract links to other pages from it, and return information found on the page to be [[Index (search engine)|indexed]].<ref>{{cite web| url=http://www.thinkpink.com/bp/Thesis/Thesis.pdf| title=Finding What People Want: Experiences with the WebCrawler| access-date=May 7, 2007| publisher=The Second International WWW Conference Chicago, USA, October 17–20, 1994| author=Brian Pinkerton| archive-date=May 8, 2007| archive-url=https://web.archive.org/web/20070508124837/http://www.thinkpink.com/bp/Thesis/Thesis.pdf| url-status=live}}</ref> According to a 2004 article by former industry analyst and current [[Google]] employee [[Danny Sullivan (technologist)|Danny Sullivan]], the phrase "search engine optimization" came into use in 1997. Sullivan credits SEO practitioner Bruce Clay as one of the first people to popularize the term.<ref>{{cite web|url=http://forums.searchenginewatch.com/showpost.php?p=2119&postcount=10|title=Who Invented the Term "Search Engine Optimization"?|author=Danny Sullivan|date=June 14, 2004|publisher=[[Search Engine Watch]]|archive-url=https://web.archive.org/web/20100423051708/http://forums.searchenginewatch.com/showpost.php?p=2119|archive-date=April 23, 2010|access-date=May 14, 2007}} See [https://groups.google.com/group/alt.current-events.net-abuse.spam/browse_thread/thread/6fee2777dc17b8ab/3858bff94e56aff3?lnk=st&q=%22search+engine+optimization%22&rnum=1#3858bff94e56aff3 Google groups thread] {{Webarchive|url=https://web.archive.org/web/20130617012709/http://groups.google.com/group/alt.current-events.net-abuse.spam/browse_thread/thread/6fee2777dc17b8ab/3858bff94e56aff3?lnk=st&q=%22search+engine+optimization%22&rnum=1#3858bff94e56aff3 |date=June 17, 2013 }}.</ref>

Early versions of search [[algorithm]]s relied on webmaster-provided information such as the keyword [[meta tag]] or index files in engines like [[Aliweb|ALIWEB]]. Meta tags provide a guide to each page's content. Using metadata to index pages proved unreliable, however, because the webmaster's choice of keywords in the meta tag could be an inaccurate representation of the site's actual content. Inaccurate or incomplete data in meta tags created the potential for pages to be mischaracterized in irrelevant searches.<ref>{{Citation|chapter=The Challenge is Open|date=2020-11-17|title=Brain vs Computer|pages=189–211|publisher=WORLD SCIENTIFIC|doi=10.1142/9789811225017_0009|isbn=978-981-12-2500-0|s2cid=243130517}}</ref>{{dubious|date=October 2012}} Web content providers also manipulated attributes within the [[HTML]] source of a page in an attempt to rank well in search engines.<ref>{{cite web |url=http://www.csse.monash.edu.au/~lloyd/tilde/InterNet/Search/1998_WWW7.html |title=What is a tall poppy among web pages? |date=April 1998 |website=Monash University |access-date=May 8, 2007 |author=Pringle, G. |author2=Allison, L. |author3=Dowe, D. |archive-date=April 27, 2007 |archive-url=https://web.archive.org/web/20070427161650/http://www.csse.monash.edu.au/~lloyd/tilde/InterNet/Search/1998_WWW7.html}}</ref> By 1997, search engine designers recognized that webmasters were making efforts to rank well in search engines and that some webmasters were [[spamdexing|manipulating their rankings]] in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as [[AltaVista]] and [[Infoseek]], adjusted their algorithms to prevent webmasters from manipulating rankings.<ref name="infoseeknyt">{{cite news|url=https://query.nytimes.com/gst/fullpage.html?res=940DE0DF123BF932A25752C1A960958260|title=Desperately Seeking Surfers|date=November 11, 1996|newspaper=New York Times|author=Laurie J. Flynn|access-date=May 9, 2007|archive-date=October 30, 2007|archive-url=https://web.archive.org/web/20071030131226/http://query.nytimes.com/gst/fullpage.html?res=940DE0DF123BF932A25752C1A960958260|url-status=live}}</ref>

Because early search engines relied on factors such as [[keyword density]], which were exclusively within a webmaster's control, they suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their [[Search engine results page|results page]]s showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. This meant moving away from heavy reliance on term density toward a more holistic process for scoring semantic signals.<ref name="Forbes">{{cite magazine|url=https://www.forbes.com/sites/jaysondemers/2016/01/20/is-keyword-density-still-important-for-seo/2/#2ef69ba36733|title=Is Keyword Density Still Important for SEO|author=Jason Demers|date=January 20, 2016|magazine=Forbes|access-date=August 15, 2016|archive-date=August 16, 2016|archive-url=https://web.archive.org/web/20160816221641/http://www.forbes.com/sites/jaysondemers/2016/01/20/is-keyword-density-still-important-for-seo/2/#2ef69ba36733|url-status=live}}</ref> Search engines responded by developing more complex [[Search algorithm|ranking algorithms]] that took into account additional factors that were more difficult for webmasters to manipulate.{{Citation needed|date=January 2025}}

Some search engines have also reached out to the SEO industry and are frequent sponsors and guests at SEO conferences, webchats, and seminars. Major search engines provide information and guidelines to help with website optimization.<ref name="g-wmguide" /><ref name="ms-wmguide" /> Google has a [[Sitemaps]] program to help webmasters learn if Google is having any problems indexing their website, and it also provides data on Google traffic to the website.<ref name="googlesitemaps">{{cite web|url=https://developers.google.com/search/docs/crawling-indexing/sitemaps/overview|title=Sitemaps|access-date=July 4, 2012|archive-date=June 22, 2023|archive-url=https://web.archive.org/web/20230622175619/https://developers.google.com/search/docs/crawling-indexing/sitemaps/overview|url-status=live}}</ref> [[Bing Webmaster Center|Bing Webmaster Tools]] provides a way for webmasters to submit a sitemap and web feeds, allows users to determine the "crawl rate", and tracks the index status of web pages.

In 2015, it was reported that [[Google]] was developing and promoting mobile search as a key feature within future products.
In response, many brands began to take a different approach to their Internet marketing strategies.<ref>{{Cite web |url=https://www.startupgrind.com/blog/mobile-is-the-internet-for-consumers/ |title="By the Data: For Consumers, Mobile is the Internet" ''Google for Entrepreneurs Startup Grind'' September 20, 2015. |access-date=January 8, 2016 |archive-date=January 6, 2016 |archive-url=https://web.archive.org/web/20160106040341/https://www.startupgrind.com/blog/mobile-is-the-internet-for-consumers/ |url-status=live }}</ref>

===Relationship with Google===
In 1998, two graduate students at [[Stanford University]], [[Larry Page]] and [[Sergey Brin]], developed "Backrub", a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, [[PageRank]], is a function of the quantity and strength of [[inbound link]]s.<ref name="lgscalehyptxt">{{cite web|author1=Brin, Sergey|author2=Page, Larry|name-list-style=amp|url=http://www-db.stanford.edu/~backrub/google.html|title=The Anatomy of a Large-Scale Hypertextual Web Search Engine|publisher=Proceedings of the seventh international conference on World Wide Web|year=1998|pages=107–117|access-date=May 8, 2007|archive-date=October 10, 2006|archive-url=https://web.archive.org/web/20061010084452/http://www-db.stanford.edu/~backrub/google.html|url-status=live}}</ref> PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web and follows links from one page to another. In effect, this means that some links are stronger than others, as a page with a higher PageRank is more likely to be reached by the random surfer.

Page and Brin founded Google in 1998.<ref>{{cite web|title=Co-founders of Google - Google's co-founders may not have the name recognition of say, Bill Gates, but give them time: Google hasn't been around nearly as long as Microsoft. |website=Entrepreneur |url=http://www.entrepreneur.com/article/197848|date=2008-10-15|access-date=May 30, 2014|archive-date=May 31, 2014|archive-url=https://web.archive.org/web/20140531124147/http://www.entrepreneur.com/article/197848|url-status=live}}</ref> Google attracted a loyal following among the growing number of [[Internet]] users, who liked its simple design.<ref name="bbc-1">{{cite news|author=Thompson, Bill|url=http://news.bbc.co.uk/1/hi/technology/3334531.stm|title=Is Google good for you?|work=BBC News|date=December 19, 2003|access-date=May 16, 2007|archive-date=January 25, 2009|archive-url=https://web.archive.org/web/20090125130328/http://news.bbc.co.uk/1/hi/technology/3334531.stm|url-status=live}}</ref> Google considered both off-page factors (such as PageRank and hyperlink analysis) and on-page factors (such as keyword frequency, [[meta tags]], headings, links, and site structure), which enabled it to avoid the kind of manipulation seen in search engines that considered only on-page factors for their rankings. Although PageRank was more difficult to [[Gaming the system|game]], webmasters had already developed link-building tools and schemes to influence the [[Inktomi]] search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale.
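
The random-surfer model described above lends itself to a short worked example. The following is a minimal Python sketch of PageRank-style power iteration over a toy link graph, not Google's actual implementation; the function name, the damping factor of 0.85, and the example graph are assumptions chosen purely for illustration.

<syntaxhighlight lang="python">
# Minimal sketch of the random-surfer model behind PageRank (illustrative only).
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}  # start from a uniform distribution
    for _ in range(iterations):
        # With probability (1 - damping), the surfer jumps to a random page.
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if outlinks:
                # A page passes its rank evenly along its outbound links.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # A dangling page (no outlinks) spreads its rank over all pages.
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

# Toy graph: "C" has the most inbound links and ends up with the highest rank;
# "D", with no inbound links, ends up lowest.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(graph))
</syntaxhighlight>

Each iteration redistributes rank along outbound links, which is why pages with many strong inbound links accumulate higher scores, and why manufactured link schemes of the kind described below could inflate a page's apparent prominence.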
Some of these link schemes involved the creation of thousands of sites for the sole purpose of [[spamdexing|link spamming]].<ref>{{cite web|author1=Zoltan Gyongyi|author2=Hector Garcia-Molina|name-list-style=amp|url=http://infolab.stanford.edu/~zoltan/publications/gyongyi2005link.pdf|title=Link Spam Alliances|publisher=Proceedings of the 31st VLDB Conference, Trondheim, Norway|year=2005|access-date=May 9, 2007|archive-date=June 12, 2007|archive-url=https://web.archive.org/web/20070612023948/http://infolab.stanford.edu/~zoltan/publications/gyongyi2005link.pdf|url-status=live}}</ref> By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation.<ref name="nyt0607">{{cite news|newspaper=New York Times|access-date=June 6, 2007|url=https://www.nytimes.com/2007/06/03/business/yourmoney/03google.html|title=Google Keeps Tweaking Its Search Engine|date=June 3, 2007|first=Saul|last=Hansell|archive-date=November 10, 2017|archive-url=https://web.archive.org/web/20171110133529/https://www.nytimes.com/2007/06/03/business/yourmoney/03google.html|url-status=live}}</ref> The leading search engines, Google, [[Bing (search engine)|Bing]], and [[Yahoo]], do not disclose the algorithms they use to rank pages. Some SEO practitioners have studied different approaches to search engine optimization and have shared their personal opinions.<ref>{{cite web |first=Danny |last=Sullivan |url=https://www.searchenginewatch.com/2005/09/29/rundown-on-search-ranking-factors/ |title=Rundown On Search Ranking Factors |publisher=[[Search Engine Watch]] |date=September 29, 2005 |access-date=May 8, 2007 |author-link=Danny Sullivan (technologist) |url-status=live |archive-url=https://web.archive.org/web/20070528133132/http://blog.searchenginewatch.com/blog/050929-072711 |archive-date=May 28, 2007 }}</ref> Patents related to search engines can also provide information for understanding search engines better.<ref>{{cite web|author=Christine Churchill|url=http://searchenginewatch.com/showPage.html?page=3564261|title=Understanding Search Engine Patents|publisher=[[Search Engine Watch]]|date=November 23, 2005|access-date=May 8, 2007|archive-url=https://web.archive.org/web/20070207222630/http://searchenginewatch.com/showPage.html?page=3564261|archive-date=February 7, 2007|df=mdy-all}}</ref>

In 2005, Google began personalizing search results for each user. Depending on their history of previous searches, Google crafted results for logged-in users.<ref>{{cite web|url=http://searchenginewatch.com/3563036|title=Google Personalized Search Leaves Google Labs|work=searchenginewatch.com|publisher=Search Engine Watch|archive-url=https://web.archive.org/web/20090125065500/https://www.searchenginewatch.com/3563036|archive-date=January 25, 2009|access-date=September 5, 2009}}</ref> In 2007, Google announced a campaign against paid links that transfer PageRank.<ref>{{cite web|url=https://www.searchenginejournal.com/8-things-we-learned-about-google-pagerank/5897/|title=8 Things We Learned About Google PageRank|date=October 25, 2007|publisher=www.searchenginejournal.com|access-date=August 17, 2009|archive-date=August 19, 2009|archive-url=https://web.archive.org/web/20090819080745/http://www.searchenginejournal.com/8-things-we-learned-about-google-pagerank/5897/|url-status=live}}</ref> On June 15, 2009, Google disclosed that it had taken measures to mitigate the effects of PageRank sculpting by use of the [[nofollow]] attribute on links.
[[Matt Cutts]], a well-known software engineer at Google, announced that Googlebot would no longer treat nofollowed links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting.<ref>{{cite web|url=https://www.mattcutts.com/blog/pagerank-sculpting/|title=PageRank sculpting|publisher=Matt Cutts|access-date=January 12, 2010|archive-date=January 6, 2010|archive-url=https://web.archive.org/web/20100106120723/http://www.mattcutts.com/blog/pagerank-sculpting/|url-status=live}}</ref> As a result of this change, the use of nofollow led to the evaporation of PageRank. To avoid this, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated [[JavaScript]] and thus permit PageRank sculpting. Additionally, several solutions have been suggested that include the use of [[HTML element#Frames|iframe]]s, [[Flash animation|Flash]], and JavaScript.<ref>{{cite web |url=http://searchengineland.com/google-loses-backwards-compatibility-on-paid-link-blocking-pagerank-sculpting-20408 |title=Google Loses "Backwards Compatibility" On Paid Link Blocking & PageRank Sculpting |date=June 3, 2009 |publisher=searchengineland.com |access-date=August 17, 2009 |archive-date=August 14, 2009 |archive-url=https://web.archive.org/web/20090814212229/http://searchengineland.com/google-loses-backwards-compatibility-on-paid-link-blocking-pagerank-sculpting-20408/ |url-status=live }}</ref>

In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.<ref>{{cite web|url=https://googleblog.blogspot.com/2009/12/personalized-search-for-everyone.html|title=Personalized Search for everyone|access-date=December 14, 2009|archive-date=December 8, 2009|archive-url=https://web.archive.org/web/20091208140917/http://googleblog.blogspot.com/2009/12/personalized-search-for-everyone.html|url-status=live}}</ref> On June 8, 2010, a new web indexing system called [[Google Caffeine]] was announced. Designed to allow users to find news results, forum posts, and other content much sooner after publication, Google Caffeine changed the way Google updated its index so that new content would show up in search results more quickly. According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."<ref>{{cite web |url=http://googleblog.blogspot.com/2010/06/our-new-search-index-caffeine.html |title=Our new search index: Caffeine |publisher=Google: Official Blog |access-date=May 10, 2014 |archive-date=June 18, 2010 |archive-url=https://web.archive.org/web/20100618160021/http://googleblog.blogspot.com/2010/06/our-new-search-index-caffeine.html |url-status=live }}</ref> [[Google Instant]], real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically, site administrators had spent months or even years optimizing a website to increase search rankings.
With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.<ref>{{cite web |title=Relevance Meets Real-Time Web |publisher=[[Google Blog]] |url=http://googleblog.blogspot.com/2009/12/relevance-meets-real-time-web.html |access-date=January 4, 2010 |archive-date=April 7, 2019 |archive-url=https://web.archive.org/web/20190407221454/http://googleblog.blogspot.com/2009/12/relevance-meets-real-time-web.html |url-status=live }}</ref>

In February 2011, Google announced the [[Google Panda|Panda]] update, which penalizes websites containing content duplicated from other websites and sources. Historically, websites had copied content from one another and benefited in search engine rankings by engaging in this practice. However, Google implemented a new system that punishes sites whose content is not unique.<ref>{{cite web|title=Google Search Quality Updates|publisher=[[Google Blog]]|url=http://googleblog.blogspot.com/2011/02/finding-more-high-quality-sites-in.html|access-date=March 21, 2012|archive-date=April 23, 2022|archive-url=https://web.archive.org/web/20220423234246/https://googleblog.blogspot.com/2011/02/finding-more-high-quality-sites-in.html|url-status=live}}</ref> The 2012 [[Google Penguin]] update attempted to penalize websites that used manipulative techniques to improve their rankings on the search engine.<ref>{{cite web|title=What You Need to Know About Google's Penguin Update|work=Inc |date=June 20, 2012|publisher=[[Inc.com]]|url=http://www.inc.com/aaron-aders/what-you-need-to-know-about-googles-penguin-update.html|access-date=December 6, 2012|archive-date=December 20, 2012|archive-url=https://web.archive.org/web/20121220235821/http://www.inc.com/aaron-aders/what-you-need-to-know-about-googles-penguin-update.html|url-status=live |last1=Aders |first1=Aaron }}</ref> Although Google Penguin has been presented as an algorithm aimed at fighting web spam, it actually focuses on spammy links<ref>{{Cite news|url=http://searchengineland.com/google-penguin-looks-mostly-link-source-says-google-260902|title=Google Penguin looks mostly at your link source, says Google|date=2016-10-10|work=Search Engine Land|access-date=2017-04-20|language=en-US|archive-date=April 21, 2017|archive-url=https://web.archive.org/web/20170421001835/http://searchengineland.com/google-penguin-looks-mostly-link-source-says-google-260902|url-status=live}}</ref> by gauging the quality of the sites the links come from. The 2013 [[Google Hummingbird]] update featured an algorithm change designed to improve Google's natural language processing and semantic understanding of web pages.
Hummingbird's language-processing system falls under the newly recognized term of "conversational search", where the system pays more attention to each word in the query in order to match pages to the meaning of the whole query rather than to a few individual words.<ref>{{cite web|title=FAQ: All About The New Google "Hummingbird" Algorithm|url=https://searchengineland.com/google-hummingbird-172816|website=www.searchengineland.com|date=September 26, 2013|access-date=March 17, 2018|archive-date=December 23, 2018|archive-url=https://web.archive.org/web/20181223110045/https://searchengineland.com/google-hummingbird-172816|url-status=live}}</ref> With regard to the changes made to search engine optimization, for content publishers and writers, Hummingbird is intended to resolve issues by getting rid of irrelevant content and spam, allowing Google to deliver high-quality content and to rely on those who produce it as "trusted" authors.

In October 2019, Google announced it would start applying [[BERT (language model)|BERT]] models for English-language search queries in the US. Bidirectional Encoder Representations from Transformers (BERT) was another attempt by Google to improve its natural language processing, this time in order to better understand the search queries of its users.<ref>{{Cite web|title=Understanding searches better than ever before|url=https://blog.google/products/search/search-language-understanding-bert/|date=2019-10-25|website=Google|language=en|access-date=2020-05-12|archive-date=January 27, 2021|archive-url=https://web.archive.org/web/20210127042834/https://www.blog.google/products/search/search-language-understanding-bert/|url-status=live}}</ref> In terms of search engine optimization, BERT was intended to connect users more easily to relevant content and to increase the quality of traffic coming to websites ranking in the [[Search engine results page|search engine results page]].