I was checking the site giftedandtalented.com. First of all, the source code is so cluttered. When I disabled JS in my browser, nothing showed up except the top navigation, but when I checked the site in a text browser, the content appeared. How did that happen? Does it have an alternate way of serving the site content? My intention was to check whether the content is readable by Google, but the web cache was not showing up.
Why do some websites have such cluttered source code?
As for messy code? It doesn't matter as long as it works. Perhaps it is to make it a little harder to 'steal' the code. Perhaps it is a lazy programmer. Perhaps the site is generated dynamically and nobody has put in the work to format the output properly.
It makes no difference to a browser, and that is all that matters.
when i checked the site in text browser, the content showed up
On the other hand, it is possible to deliver different content (or even a completely different site) based on the user agent. So if the text browser is recognized by its user-agent string, the server could deliver a special version of the site built for such cases.
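A minimal sketch of what such server-side user-agent sniffing might look like (the template names and the lists of user-agent substrings here are hypothetical, just to show the idea):

```python
# Hypothetical sketch: the server inspects the User-Agent header
# and picks which version of the page to send back. Text browsers
# and crawlers get a plain, JS-free variant; everyone else gets
# the JavaScript-heavy version.

TEXT_BROWSERS = ("Lynx", "w3m", "ELinks")
CRAWLERS = ("Googlebot", "bingbot")

def pick_template(user_agent: str) -> str:
    """Return which page variant the server would serve."""
    if any(name in user_agent for name in TEXT_BROWSERS + CRAWLERS):
        return "plain.html"   # static markup, readable without JS
    return "app.html"         # JS-driven version for normal browsers

print(pick_template("Lynx/2.9.0 libwww-FM/2.14"))      # plain.html
print(pick_template("Mozilla/5.0 (Windows NT 10.0)"))  # app.html
```

That would explain exactly the behaviour you saw: a regular browser with JS disabled gets the JS version (which renders almost nothing), while a text browser is recognized and gets a readable static page. Note that serving crawlers different content from users is what Google calls cloaking, and it can get a site penalized.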