white hat seo - black hat seo

Search engine optimization (SEO) is a set of methods aimed at improving the ranking of a website in search engine listings, and could be considered a subset of search engine marketing. The term SEO also refers to "search engine optimizers," an industry of consultants who carry out optimization projects on behalf of clients' sites. Some commentators, and even some SEOs, break down methods used by practitioners into categories such as "white hat SEO" (methods generally approved by search engines, such as building content and improving site quality), or "black hat SEO" (tricks such as cloaking and spamdexing). White hatters say that black hat methods are an attempt to manipulate search rankings unfairly. Black hatters counter that all SEO is an attempt to manipulate rankings, and that the particular methods one uses to rank well are irrelevant.

Search engines display different kinds of listings in the search engine results pages (SERPs), including: pay per click advertisements, paid inclusion listings, and organic search results. SEO is primarily concerned with advancing the goals of a website by improving the number and position of its organic search results for a wide variety of relevant keywords. SEO strategies may increase both the number and quality of visitors. Search engine optimization is sometimes offered as a stand-alone service, or as a part of a larger marketing effort, and can often be very effective when incorporated into the initial development and design of a site.

For competitive, high-volume search terms (such as "web directory"), the cost of pay per click advertising can be substantial. Ranking well in the organic search results can provide the same targeted traffic at a potentially significant savings. Site owners may choose to optimize their sites for organic search if the cost of optimization is less than the cost of advertising.

Not all sites have identical goals for search optimization. Some sites seek any and all traffic, and may be optimized to rank highly for common search phrases. A broad search optimization strategy can work for a site that has broad interest, such as a periodical, a web directory, or a site that displays advertising with a CPM revenue model. In contrast, many businesses try to optimize their sites for large numbers of highly specific keywords that indicate readiness to buy. Overly broad search optimization can hinder marketing strategy by generating a large volume of low-quality inquiries that cost money to handle, yet result in little business. Focusing on desirable traffic generates better quality sales leads, resulting in more sales. Search engine optimization can be very effective when used as part of a smart niche marketing strategy.

=====================================================

Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all a webmaster needed to do was submit a site to the various engines, which would run spiders, programs that "crawl" the site and store the collected data. The default behavior was to scan an entire webpage for so-called related search words, so a page with many different words matched more searches, and a webpage containing a dictionary-type listing would match almost all searches, limited only by unique names. The search engines then sorted the information by topic and served results based on the pages they had crawled. As the number of documents online kept growing, and more webmasters realized the value of organic search listings, some popular search engines began to sort their listings so they could display the most relevant pages first. This was the start of friction between search engines and webmasters that continues to this day.

At first, search engines were guided by the webmasters themselves. Early versions of search algorithms relied on webmaster-provided information such as category and keyword meta tags, or index files in engines like ALIWEB. Meta tags provided a guide to each page's content. When some webmasters began to abuse meta tags, causing their pages to rank for irrelevant searches, search engines abandoned their consideration of meta tags and instead developed more complex ranking algorithms, taking into account factors that elevated only a limited number of words per page (an anti-dictionary measure) and that were more diverse, including:

Text within the title tag
Domain or Subdomain name
URL directories and file names
HTML tags: headings, emphasized and strongly emphasized text
Term frequency, both in the document and globally, often misunderstood and mistakenly referred to as keyword density (see the sketch after this list)
Keyword proximity
Keyword adjacency
Keyword sequence
Alt attributes for images
Text within NOFRAMES tags
Content development
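
The distinction in this list between term frequency and keyword density is easy to blur, so here is a minimal Python sketch, purely illustrative and not taken from any engine, that computes both measures for a single page:

import re
from collections import Counter

def term_frequency(text, term):
    # Raw count of how often the term occurs in this document.
    words = re.findall(r"[a-z0-9']+", text.lower())
    return Counter(words)[term.lower()]

def keyword_density(text, term):
    # Occurrences of the term divided by the total word count,
    # the ratio often (mistakenly) treated as the ranking factor itself.
    words = re.findall(r"[a-z0-9']+", text.lower())
    return Counter(words)[term.lower()] / len(words) if words else 0.0

page = "SEO tips: good SEO builds content, bad SEO stuffs keywords."
print(term_frequency(page, "SEO"))             # 3
print(round(keyword_density(page, "SEO"), 2))  # 0.3

A term's global frequency across the whole index, which engines also weigh, is omitted here for brevity.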

Pringle et al. (1998) also defined a number of attributes within the HTML source of a page that were often manipulated by web content providers attempting to rank well in search engines. But by relying so extensively on factors that were still within the webmasters' exclusive control, search engines continued to suffer from abuse and ranking manipulation. In order to provide better results to their users, search engines had to adapt to ensure their SERPs showed the most relevant search results, rather than useless pages stuffed with numerous keywords by unscrupulous webmasters using a bait-and-switch lure to display unrelated webpages. This led to the rise of a new kind of search engine.

======================================================

Google was started by two PhD students at Stanford University, Sergey Brin and Larry Page, and brought a new concept to evaluating web pages. This concept, called PageRank, has been important to the Google algorithm from the start. PageRank relies heavily on incoming links and uses the logic that each link to a page is a vote for that page's value. The more incoming links a page has, the more "worthy" it is. The value of each incoming link itself varies directly with the PageRank of the page it comes from and inversely with the number of outgoing links on that page.
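
That description corresponds to a simple iterative computation. The sketch below is only an illustration of the idea in Python, assuming the commonly cited damping factor of 0.85 and an invented three-page link graph; it is not Google's actual implementation:

# Toy PageRank: on every iteration each page passes a share of its score
# to the pages it links to, so a link from a high-ranked page with few
# outgoing links is worth more than one from a low-ranked, link-heavy page.
links = {            # hypothetical graph: page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

damping = 0.85
rank = {page: 1.0 / len(links) for page in links}

for _ in range(50):  # iterate until the scores settle
    new_rank = {page: (1 - damping) / len(links) for page in links}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

print(rank)  # "c" ends up highest: it is linked from both "a" and "b"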

With help from PageRank, Google proved to be very good at serving relevant results. Google became the most popular and successful search engine. Because PageRank measured an off-site factor, Google felt it would be more difficult to manipulate than on-page factors.

However, webmasters had already developed link building tools and schemes to influence the Inktomi search engine. These methods proved to be equally applicable to Google's algorithm. Many sites focused on exchanging, buying, and selling links on a massive scale. PageRank's reliance on the link as a vote of confidence in a page's value was undermined as many webmasters sought to garner links purely to influence Google into sending them more traffic, irrespective of whether the link was useful to human site visitors.

Further complicating the situation, the default behavior was still to scan an entire webpage or subdomain for so-called related search words, and a webpage containing a dictionary-type listing would still match almost all searches (except special names), at an even higher priority given by link rank. Dictionary pages and link schemes could severely skew search results.

It was time for Google -- and other search engines -- to look at a wider range of off-site factors. There were other reasons to develop more intelligent algorithms: the Internet was reaching a vast population of non-technical users who were often unable to use advanced querying techniques to reach the information they were seeking, and the sheer volume and complexity of the indexed data was vastly different from that of the early days. Search engines had to develop predictive, semantic, linguistic and heuristic algorithms. Around the same time as the work that led to Google, IBM had begun work on the Clever Project, and Jon Kleinberg was developing the HITS algorithm.

A proxy for the PageRank metric is still displayed in the Google Toolbar, though the displayed value is rounded to an integer and the data is updated infrequently, so it is likely to be outdated. For these reasons, and because PageRank is only one of more than 100 "signals" that Google considers in ranking pages, experienced SEOs recommend ignoring the displayed PageRank.

Today, most search engines keep their methods and ranking algorithms secret, both to compete at delivering the most valuable search results and to deter spam pages from clogging those results. A search engine may use hundreds of factors in ranking the listings on its SERPs; the factors themselves and the weight each carries may change continually. Algorithms can differ widely: a webpage that ranks #1 in one search engine could rank #200 in another.
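
Conceptually, such a ranking function can be pictured as a weighted combination of many per-page factor scores. The sketch below uses invented factor names and weights purely to illustrate why the same page can rank very differently on different engines; it does not describe any real engine's formula:

# Hypothetical scoring: each engine picks its own factors and weights,
# and changing the weights reorders the results.
weights = {"title_match": 0.30, "link_popularity": 0.50, "body_match": 0.20}

def score(page_factors):
    # Weighted sum of factor scores in the range 0..1.
    return sum(weights[name] * page_factors.get(name, 0.0) for name in weights)

page_a = {"title_match": 0.9, "link_popularity": 0.2, "body_match": 0.8}
page_b = {"title_match": 0.4, "link_popularity": 0.9, "body_match": 0.5}

print(round(score(page_a), 2))  # 0.53
print(round(score(page_b), 2))  # 0.67; another engine's weights could flip this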

Google, Yahoo and MSN do not disclose the algorithms they use to rank pages. Some SEOs have carried out controlled experiments to gauge the effects of different approaches to search optimization. Based on these experiments, often shared through online forums and blogs, professional SEOs form a consensus on what methods work best.
25.8.06 15:03
 

