Ask HN: Please critique vldtr (vldtr.com)
10 points by jeroen on July 28, 2009 | hide | past | favorite | 12 comments



Nice piece of work. A few observations: (1) It would be good to be able to link from the site name itself in the results rather than having to hit the W3C/HTML/Trash icons. Also, I would show the icons by default rather than only on hover since otherwise the user might not discover them. (2) The randomly changing background color on your page is 'cool' but it can be confusing for the user. A few times I thought I'd back-buttoned onto a different site. (3) This is a very minor point but the vertical lines which contain the 'body' of the homepage look a little 'out' at some zoom levels on FF 3.0.12.

HTH.


Thanks for your comments. I'll try some alternatives for (1) and see if I can come up with a better UI.


+1 Do you have ideas for more features? Since the validation gets passed to W3C, the main or only added feature is saving your URL list.


I started building this after a typo in one of my (otherwise perfectly valid) sites totally broke the layout in IE. With vldtr I now have an easier first line of defense against such breakage.

Another site I've inherited doesn't validate at all. It has lots of pages that I can now revalidate all at once after each deployment.

So indeed, the list is the main feature which distinguishes it from using the W3C validators directly. I'm considering more features, but I'm still thinking them through.


Not a bad idea, but it doesn't seem to work with XHTML that is served according to spec. I've got a site that's valid XHTML, served as application/xhtml+xml; it works fine in Firefox/Safari/Chrome/Opera (and not at all in IE, but I don't care), and it validates fine at validator.w3.org.

vldtr says "2 errors", and if I click the "W3C" logo next to it there, it sends me to the W3C's "Feed Validation Service", which rightfully complains that "It looks like this is a web page, not a feed".


That was a bug in my code that determines the content type. It should work now. Thanks for reporting!
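For anyone curious, this kind of dispatch can be sketched in a few lines of Python. To be clear, this is my own minimal sketch, not vldtr's actual code: the function names and the default-to-markup fallback are assumptions, though the two W3C endpoint URLs are the real public ones.

```python
from urllib.parse import quote

# Real W3C endpoints: the markup validator and the feed validation service.
MARKUP_VALIDATOR = "http://validator.w3.org/check?uri="
FEED_VALIDATOR = "http://validator.w3.org/feed/check.cgi?url="

def pick_validator(content_type):
    """Return the right validator URL prefix for a page's Content-Type header."""
    # Strip parameters like "; charset=utf-8" before comparing.
    mime = content_type.split(";")[0].strip().lower()
    if mime in ("text/html", "application/xhtml+xml"):
        return MARKUP_VALIDATOR
    if mime in ("application/rss+xml", "application/atom+xml"):
        return FEED_VALIDATOR
    # Defaulting to the markup validator avoids misrouting ordinary
    # pages to the feed checker, which was the bug described above.
    return MARKUP_VALIDATOR

def validation_url(page_url, content_type):
    """Build the full validator URL for a given page."""
    return pick_validator(content_type) + quote(page_url, safe="")
```

The key detail is splitting off the charset parameter before comparing, so "application/xhtml+xml; charset=utf-8" still routes to the markup validator.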


With NoScript, I had to allow kruisit.nl before http://vldtr.com/?key=HN would do anything besides list the URLs. It's not a big problem (it just makes me a little nervous), but since you seem to control both servers, you might want to move whatever is being loaded from kruisit.nl over to vldtr.com.


Thanks. It was jquery, and I've moved it.


For a quick look at results: http://vldtr.com/?key=HN


It looks like you're making requests directly to the online validators (this is a guess based on how slow it is). Why not just run a local copy? The source for the HTML validator is available, and I would be surprised if the others were not available too: http://validator.w3.org/source/


This project is only weeks old. I wanted to get it running asap. I'll take the time to get the validators running locally soon.
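In the meantime, the hosted validator can be queried programmatically. Here's a rough Python sketch of how that might look; the X-W3C-Validator-Status and X-W3C-Validator-Errors response headers are what the legacy W3C validator documented for its API, but treat their exact behavior (and my function names) as assumptions, and keep requests throttled per the W3C's usage policy.

```python
import urllib.request
from urllib.parse import quote

def validator_query(page_url):
    """Build the W3C markup validator URL for a single page."""
    return "http://validator.w3.org/check?uri=" + quote(page_url, safe="")

def check_page(page_url):
    """Fetch the validator result for one URL and report its status headers.

    Returns a (status, errors) tuple, e.g. ("Valid", "0") or ("Invalid", "2").
    """
    req = urllib.request.Request(
        validator_query(page_url),
        headers={"User-Agent": "vldtr-sketch/0.1"},  # identify the client politely
    )
    with urllib.request.urlopen(req) as resp:
        status = resp.headers.get("X-W3C-Validator-Status", "Unknown")
        errors = resp.headers.get("X-W3C-Validator-Errors", "?")
    return status, errors

# Revalidating a saved list after each deployment is then just a loop
# (commented out here so the sketch doesn't hit the network on import):
# for url in saved_urls:
#     print(url, *check_page(url))
```

Running a local copy of the validator would make the same code work by swapping the hostname, without the latency or rate-limit concerns.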


This is awesome. It scratches an itch I always felt I had but never consciously realized: the combination of saving your list with the discover feature. Good job.

Edit: also plus points for being Dutch (I'm Flemish).



