
SEO Objective Questions & Answers for Interview



Q.1) What is a mobile subdomain?
 

A. A subdomain that the web server will send users to based on their user agent. (Your Answer)

B. A subdomain that can be easily changed from one server to another.

C. A subdomain that serves different results to users based on their user agent.

D. A subdomain that dynamically sizes content to the screen size of the user's browser.
 
Explanation
A mobile subdomain is one that the web server will send users to based on their user agent. In other words, the server will detect which users are visiting your site from a mobile device and direct those users to a mobile-optimized subdomain instead of the main website.

If you got this question wrong, check out this article by Google on separate URLs for mobile vs. desktop for more information. (For more information about different types of mobile configurations, please see The Definitive Guide To Technical Mobile SEO.)
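Google's separate-URLs guidance recommends annotating the relationship between the two versions so search engines can connect them. A minimal sketch, assuming a hypothetical site with a desktop page at example.com/page and a mobile equivalent at m.example.com/page:

```html
<!-- On the desktop page (https://example.com/page) -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/page">

<!-- On the mobile page (https://m.example.com/page) -->
<link rel="canonical" href="https://example.com/page">
```

The rel="alternate" annotation tells search engines where the mobile version lives, and the rel="canonical" on the mobile page points back to the desktop version so the two aren't treated as duplicates.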

Q.2) What is responsive web design?
 

A. site design that adapts to the specific needs of the user.

B. site design that uses CSS to dynamically adapt the HTML to fit the user's device and screen size. (Your Answer)

C. site design that determines the user agent of the browser and sends different HTML sized for optimal viewing.

D. site that provides different HTML to Google from the HTML it delivers to users. 
 
Explanation
According to The Definitive Guide To Technical Mobile SEO, responsive web design, or responsive design, is a type of site implementation wherein a web page "serves basically the same content to all users but detects the device and screen size and builds the layout accordingly. As the screen size gets smaller, the page may show fewer images, less text or a simplified navigation."

If you got this question wrong, go to this article by Google on responsive design for more information.
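The "same HTML, adapted layout" idea can be sketched with a viewport meta tag plus a CSS media query (the class names here are hypothetical):

```html
<!-- The same HTML is sent to every device; CSS adapts the layout. -->
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  .sidebar { float: right; width: 300px; }
  /* On narrow screens, stack the sidebar below the main content. */
  @media (max-width: 640px) {
    .sidebar { float: none; width: 100%; }
  }
</style>
```

Because the server sends identical HTML to all user agents, there is nothing to keep in sync between desktop and mobile versions, which is one reason Google recommends responsive design as the default configuration.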

Q.3) What does dynamic serving deliver?
 

A. site that adapts to the specific needs of the user.

B. site that serves HTML to the browser so it can dynamically adapt the content to its size.

C. site that determines the user agent of the browser and sends different HTML sized for optimal viewing. (Your Answer)

D. site that provides different HTML to Google from the HTML it delivers to users. 
 
Explanation
Dynamic serving is a method whereby the server sends different HTML to the visitor based on the user agent, with each version optimized to each particular set up. (For example, a site may dynamically serve different HTML to desktop and mobile users.) Whereas responsive design uses CSS to render the same HTML differently based on the user's device, dynamic serving serves different HTML code altogether.

If you got this question wrong, go to this article by Google on dynamic serving for more information. (For more information about different types of mobile configurations, please see The Definitive Guide To Technical Mobile SEO.)


Q.4) Of the items listed below, what is the single most important on-page SEO factor?
 
A. The number of uses of the target keyword.

B. Use of the keywords in the keywords meta tag.

C. Use of the target keyword in the top heading of the page.

D. Use of the target keyword in the title tag. (Your Answer) 
 
Explanation
Using the target keyword in the title tag will be the most effective of the options shown above. If you got this question wrong, you can check out the Periodic Table of SEO Success Factors or learn more about On-Page SEO here.

Q.5) What is the most important heading tag on a page?
 

A. It doesn't matter; they are all the same.

B. The <h1> tag.

C. The highest level heading tag used on the page. (Your Answer)

D. The <h0> tag. 
 
Explanation
The most important heading tag is the highest-level tag on the page, with <h1> being the highest and <h6> being the lowest. Note that there are many in the industry who think the <h1> tag is the correct answer here, but this does not make sense in today's environment.

Tests run by Moz show that simply having a keyword in a bigger font has the same impact, which makes far more sense: page markup is relative, so the highest-level heading tag is the one that matters most.

Q.6) What is the most important SEO ranking factor related to on-page content?
 

A. The relevance and breadth of the content on the page. (Your Answer)

B. The use of keywords in the heading tags.

C. The number of times the keywords are repeated in the content.

D. Whether or not the target keyword is used in the first 50 words. 
 
Explanation
The overall relevance of content matters more than how many times you use specific keyword phrases. Yes, you want to use those phrases in key places, but focus on content quality first, and make your content relevant and valuable.

You can also learn more about this by reviewing Chapter 2 of Search Engine Land's Guide To SEO, or by getting The Art of SEO and going to the section on content optimization on page 317.

Q.7) What's the ideal length for content on a web page?
 
A. 1,000 words or more.

B. It doesn't matter.

C. 100 words on e-commerce pages, 500 words or more on article pages.

D. Whatever is most appropriate to the topic and focus of the web page. (Your Answer) 
 
Explanation
There is no such thing as an ideal length for content. Write your content to meet the needs of the users visiting your page, and do that as well as you possibly can.

Q.8) What is keyword cannibalization?
 
A. When you analyze competitors to discover keywords to target.

B. When multiple pages on a site are optimized for the same keywords. (Your Answer)

C. When you compete for more than one keyword on the same page.

D. When poor SEO implementation causes your page to be non-competitive for the target keywords. 
 
Explanation
When you optimize more than one page for the same target keyword(s), they end up competing with one another for rankings, making it harder for these pages to rank for the desired terms.

You can learn more about keyword cannibalization, including how to go about fixing it here.

Q.9) What is duplicate content?
 

A. When one page has the exact same content as another.

B. When you copy content from a competitor's site.

C. When there are substantive blocks of content on a web page that either completely match, or are appreciably similar to, content on another web page. (Your Answer)

D. When one page has nearly the same content as another.
 
Explanation
Though A, B and D could all be considered duplicate content, C has the broadest definition and is therefore the most accurate. Any time a web page contains substantial blocks of content that appear word-for-word (or nearly word-for-word) on another web page, that can be considered duplicate content by Google.

You can learn more about how Google sees duplicate content here, or read more about duplicate content on Search Engine Land.

Q.10) What are hreflang tags?
 

A. They are used to tell search engines what language and country a website is intended to serve.

B. They are used to tell search engines what language and/or country a website is intended to serve.

C. They are used to tell search engines what language, or what language and country a website is intended to serve. (Your Answer)

D. They are used to indicate the preferred dialect of a language for a web page. 
 
Explanation
Hreflang tags are used to help search engines serve the correct language or regional URL in search results. Language can be specified by itself, or language plus country may be specified; a country code by itself will not be recognized.

Google explains hreflang tags in detail here. I also shot a video to explain how to use hreflang tags here.
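A minimal sketch of hreflang annotations, using hypothetical URLs; note that "en" (language alone) and "en-gb" (language plus country) are valid values, while a country code on its own is not:

```html
<!-- In the <head> of every version of the page -->
<link rel="alternate" hreflang="en" href="https://example.com/">
<link rel="alternate" hreflang="en-gb" href="https://example.com/uk/">
<link rel="alternate" hreflang="de" href="https://example.com/de/">
<!-- x-default is the fallback for users matching no listed language -->
<link rel="alternate" hreflang="x-default" href="https://example.com/">
```

Each version of the page should carry the full set of annotations, including a self-referencing one, so the tags confirm each other bidirectionally.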

Q.11) What does the Vary: User-Agent HTTP Header do?
 

A. It tells web servers more about what a user needs.

B. It tells users that a site's content varies from time to time.

C. It tells ISPs to not cache a site's content.

D. It tells caching servers that a site's content varies by user agent. (Your Answer) 
 
Explanation
Used with sites that employ dynamic serving, the Vary HTTP header signals that different content is served to different user agents, which can help Google and other search engines discover mobile content more easily.

Patrick Sexton does a great job explaining the Vary: User-Agent HTTP header here. You can also see a video with my explanation here.
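With dynamic serving, the signal lives in the HTTP response headers rather than in the HTML. A sketch of what a dynamically served page's response might include:

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Vary: User-Agent
```

The Vary: User-Agent header tells caches (and Googlebot) that the response body depends on the requesting user agent, so a cached desktop response should not be served to a mobile visitor.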

Q.12) How do you recover from Panda?
 

A. Fix thin and poor-quality content on the pages of your site, and wait. (Your Answer)

B. Fix thin, poor-quality and duplicate content on the pages of your site, and submit a reconsideration request.

C. Fix thin, poor-quality and duplicate content on the pages of your site, and wait.

D. Fix thin and poor-quality content on the pages of your site, and submit a reconsideration request.
 
Explanation
Panda is an algorithm focused on evaluating content quality. Google steadfastly maintains that duplicate content is not part of Panda, and since it's algorithmic, there is no value in submitting a reconsideration request.

You can read more about the Google Panda algorithm update here on Search Engine Land. If you have The Art of SEO, Chapter 9 discusses Panda in detail.


Q.13) How do you recover from Penguin?
 

A. Remove all links to your site with a Domain Authority of 50 or less, and wait.

B. Remove all links that do not appear to be editorially given, and then wait.

C. Clean up links from web directories, article directories, countries where you don't do business and where you have too much rich anchor text, and then wait. (Your Answer)

D. Remove or disavow all links that do not appear to be editorially given, and file a reconsideration request.

Explanation
Like Panda, Penguin is an algorithm update, so filing a reconsideration request is a waste of time. You have to wait for the algorithm to run again and find your changes. Penguin does not use a metric like Domain Authority or PageRank to assess value, and in my experience, it seems to target certain classes of links, such as those shown in the correct answer. Read more about the Google Penguin algorithm update here on Search Engine Land.

Q.14) How do you recover from a manual link penalty?

A. Remove all links to your site with a Domain Authority of 50 or lower, and wait.

B. Remove all links that do not appear to be editorially given, and then wait.

C. Clean up links from web directories, article directories, countries where you don't do business and where you have too much rich anchor text, and then wait.

D. Remove or disavow all links that do not appear to be editorially given, and file a reconsideration request. (Your Answer)

Explanation
Since manual penalties are applied by human reviewers rather than by an algorithm, they can cover a far wider range of link types, so you should remove or disavow all links to your site that were not editorially given. And because it's a manual penalty, you'll want to file a reconsideration request.

Q.15) When do you use the rel="canonical" tag? 

A. To implement 301 redirects.

B. To help resolve potential duplicate content problems.

C. To point to the desktop version of mobile pages when you have a mobile subdomain.

D. Both B. and C. (Your Answer) 

Explanation
When the same content appears on multiple URLs, the rel="canonical" tag is used to specify which version is the preferred (or canonical) version. You can learn more by seeing Google's take on it here, or watch a video I shot on the topic here.
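A minimal sketch, assuming a hypothetical duplicate URL created by a tracking parameter:

```html
<!-- In the <head> of https://example.com/product?sessionid=123 -->
<link rel="canonical" href="https://example.com/product">
```

Every duplicate variant points at the single preferred URL, which asks search engines to consolidate indexing and ranking signals on that one page.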

Q.16) When do you use meta robots noindex tags?
 
A. To prevent pages from appearing in search results. (Your Answer)

B. To block search engines from crawling pages on your site.

C. To stop PageRank flow into low-quality pages.

D. All of the above.


Explanation
The noindex meta tag prevents a web page from being indexed (and thus appearing in search results). This tag comes in handy when you have low-quality pages on your site that you are not able to delete. This page from Google explains the noindex tag, and I walk you through how to implement the tag here.
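A minimal sketch of the tag itself:

```html
<!-- In the <head>: the page can still be crawled,
     but it won't appear in search results -->
<meta name="robots" content="noindex">
```

Note that for the tag to work, the page must not be blocked in robots.txt; if crawlers can't fetch the page, they never see the noindex directive.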

Q.17) What do rel prev/next tags do?
 
A. They are used by publishers to identify groups of paginated URLs.

B. They cause search engines to treat all links into any of the pages in a group of paginated URLs as links to the entire group.

C. They help eliminate concerns with perceived duplicate content for paginated URLs.

D. All of the above. (Your Answer) 

Explanation
The rel="next" and rel="prev" link elements are used to indicate component pages within a series -- for example, a multi-page article or a forum thread spread across multiple URLs. For more information, check out this post on Search Engine Land or watch this video on how to implement rel=prev/next tags.
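A minimal sketch, assuming a hypothetical article split across three pages:

```html
<!-- In the <head> of page 2 (https://example.com/article?page=2) -->
<link rel="prev" href="https://example.com/article?page=1">
<link rel="next" href="https://example.com/article?page=3">
```

The first page in the series carries only a rel="next" element, and the last page only a rel="prev" element, so the chain unambiguously marks the start and end of the sequence.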

Q.18) Does Google care about where content is located on a page?
 
A. Yes; they only care about content visible to the user, as they want to emphasize user engagement.

B. Yes; they only care about content in the main body of the page, as the rest of the content is not page-specific.

C. Yes; placement on the page says something about the importance of the content. (Your Answer)

D. No, they don't care. 

Explanation
Where content is placed on a page says something about how important the site publisher believes that content will be to visitors. In addition, it's a fairly classic but low-value SEO practice to place large blocks of content on pages well below the fold. For these reasons, Google does place more emphasis on content that is visible above the fold.

Q.19) When implementing different filters or sort orders for products on your site, which of these should you leverage to minimize duplicate content/thin content risks?
 

A. noindex

B. robots.txt

C. nofollow

D. rel="canonical" (Your Answer) 

Explanation
While noindex does work, it's not as efficient in returning any PageRank from the sort order pages back to their parent page. Noindex pages can pass PageRank, but they pass it through their links like other pages. When you have a rel="canonical" on a page, you are asking the search engines to pass any PageRank back to the specific pages you target with the rel="canonical."
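A sketch of how this applies to a hypothetical sort-order URL: the sorted variant canonicalizes back to its parent category page, returning any PageRank it accumulates.

```html
<!-- In the <head> of https://example.com/shoes?sort=price -->
<link rel="canonical" href="https://example.com/shoes">
```

The same pattern applies to filter parameters, so the many parameter variations consolidate their signals onto the one page you actually want to rank.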

Q.20) When implementing pagination for products on your site, which of these should you leverage to minimize duplicate content/thin content risks?
 
A. rel=prev/next + noindex

B. rel=prev/next + nofollow

C. rel=prev/next (Your Answer)

D. rel=prev/next + robots.txt
 
Explanation
The noindex and rel=prev/next commands conflict with one another. There is no reason to nofollow the links on the page, as this simply blocks the flow of PageRank. Finally, if you also list the page in robots.txt, the search engines won't be able to read the pages to see the rel=prev/next commands.

c. more than 30% Answer: 30 Percent Q5)Which of the following factors have an impact on the Google PageRank? a. The …