Most of the time we don't even know which DNS server our ISP is providing.
We just get assigned a dynamic IP, log in, and let it handle the rest of the process.. maybe because we are lazy, or just too busy with other important work. 8-)
But recently, surfing became incredibly slow;
the status bar of the browser kept showing
"Resolving google.com.." bla..bla..
Hmmm… something is amiss..
Upon checking up.. it turned out the DNS server was not resolving correctly. Things can be better if we use an alternative DNS server other than the one provided by the ISP.
Some examples of alternative DNS servers:
1. JARING DNS server at 192.228.128.20
2. OpenDNS servers at 208.67.222.222 & 208.67.220.220
3. Our own DNS server for an intranet/WAN setup.
I prefer the OpenDNS servers, because of a sort of smart-cache feature.. which still points users somewhere useful whenever the real DNS record is unreachable..
eliminating frustrating authoritative DNS outages for the end user..
To use the OpenDNS servers for resolving all DNS queries,
we have to change some settings on our own PC/laptop/router.
Linux users just need to edit /etc/resolv.conf.
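A minimal /etc/resolv.conf pointing at the OpenDNS servers from the list above would look something like this:

```
# use the OpenDNS resolvers instead of the ISP-assigned ones
nameserver 208.67.222.222
nameserver 208.67.220.220
```

On a router or a Windows PC, the same two addresses go into the DNS fields of the network settings instead.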
World No Tobacco Day is observed around the world every year on May 31. The member states of the World Health Organization created World No Tobacco Day in 1987. It draws global attention to the widespread prevalence of tobacco use and to its negative health effects. The day aims to reduce the 5.4 million yearly deaths from tobacco-related health problems.[1] Since 1988 the WHO has presented one or more World No Tobacco Day (WNTD) Awards to organizations or individuals who have made exceptional contributions to reducing tobacco consumption. On May 31, 2008 the WHO called for a complete ban on tobacco advertising; the organization said studies establish a relationship between exposure to cigarette advertising and starting smoking.[2]
..and with something like this on the cigarette pack cover..
One morning.. I was not able to sleep.. I think it was 2-3 a.m.
Feeling bored..
I was switching TV channels.. some sort of scanning around..
Then I heard something great.. on TV3.. Muzik-Muzik (replay)..
It was "Dan Sebenarnya".. by Yuna & the gang..
I think it was a new nomination..
as explained by Faizal Ismail & Cheryl Samad..
Hm.. can vote via tv3.com.my or SMS..
At that moment.. I was kinda speechless..
stunned by her performance.. great voice and lyrics..
she looked very natural..
That was my first time seeing and knowing Yuna..
and I made a note in my mind.. this girl's name is "Yuna"..
a gut feeling.. "she's gonna shine.." :)
Sometimes.. while browsing my own blog for old posts or comments.. I can see some ads that are not relevant at all..
Really upsetting, until I found ways to block certain ads from being displayed by AdSense..
According to Google.. their crawler for indexing purposes (Googlebot) does respect the robots.txt content..
If we happen to specify something in robots.txt.. such as:
Disallow: /restrict_folder/
its crawler will then respect this directive.. and will not crawl whatever is inside /restrict_folder/..
Some other crawlers might not respect this directive though..
so Google recommends protecting our.. not-so-public pages with a password or some sort of authentication..
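Written out in full, a robots.txt with that rule would look like this (using the same example folder as above):

```
# applies to every crawler that honours robots.txt
User-agent: *
Disallow: /restrict_folder/
```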
OK.. but if you don't have a robots.txt defined.. or your robots.txt just allows everything, with no restricted folders..
only then will Googlebot crawl the page and read its meta directives..
and depending on the instructions in the robots META tag.. it might index, archive.. or not archive the page..
If everything is okay, it will then archive, index and do all sorts of things for searching purposes.
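For reference, the robots META tag the crawler reads lives in the page's head section; a sketch with two common directives (this exact combination is just an example):

```html
<head>
  <!-- ask crawlers not to index this page and not to keep a cached (archived) copy -->
  <meta name="robots" content="noindex, noarchive">
</head>
```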
Then comes the canonical directive (strictly speaking a link tag in the page head, rather than a META tag for robots)..
What this one defines is:
if the page happens to have two different links pointing to it but displaying the same content..
using this directive.. we can define which one should be indexed..
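A minimal sketch of the canonical tag, assuming (hypothetically) that http://blog.namran.net/some-post/ is the preferred URL for the duplicated content:

```html
<!-- placed in the <head> of every duplicate URL; "some-post" is a made-up example path -->
<link rel="canonical" href="http://blog.namran.net/some-post/">
```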
1. Log in to Google Webmaster Tools.
2. Click Tools in the left menu.
3. Then you can see "Analyze robots.txt".
My link would be something like.. https://www.google.com/webmasters/tools/robots?siteUrl=http%3A%2F%2Fblog.namran.net%2F&hl=en
This tool can test whether the robots.txt is properly written.. and whether or not it is blocking crawlers from accessing certain pages..
Just fill in the desired URL into the box provided.. and you will be able to see its analysis..
something like this..
P/S: still can't understand why my recent posts haven't been archived/indexed though.. since 10th May 2009.. can't recall why.. *sigh*
I was wondering how a Web Crawler [wikipedia.org] actually works.
Did a search and found out…
Googlebot is Google’s web crawling robot, which finds and retrieves pages on the web and hands them off to the Google indexer. It’s easy to imagine Googlebot as a little spider scurrying across the strands of cyberspace, but in reality Googlebot doesn’t traverse the web at all. It functions much like your web browser, by sending a request to a web server for a web page, downloading the entire page, then handing it off to Google’s indexer.
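That fetch-download-parse loop can be sketched in a few lines of Python. This is only a toy link extractor using the standard library, to show how a crawler discovers new URLs in a downloaded page.. not how Googlebot is actually implemented:

```python
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags, the way a crawler finds pages to visit next."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # every anchor tag's href is a candidate URL for the crawl queue
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html):
    """Return the list of link targets found in an HTML document."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

A real crawler would fetch each discovered URL in turn (respecting robots.txt, as described above) and hand the page body to an indexer.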