alt.hn

2/25/2026 at 11:02:58 PM

First Website (1992)

https://info.cern.ch

by shrikaranhanda

2/25/2026 at 11:59:45 PM

I remember that. A few weeks later I ran a script to count all the websites on the Internet... 324 at that time.

by dirk94018

2/26/2026 at 12:37:00 AM

Was your script the very first web crawler or did you just have a list?

by LeoPanthera

2/26/2026 at 1:25:04 AM

I'm also curious because I remember that the first time I used the Internet (not internet, as it is nowadays), I had to buy a paper book with categorized links to websites.

Connecting... Waiting... It was slow, both because of dial-up kbit/s and ping to websites, and every page felt like you were literally sending a request to another part of the planet. It felt like that was actually happening, and it was very different from what we experience now.

But most importantly, there was zero funding/VC money in that Internet. Only very niche websites, zero online services; even email was difficult to obtain and felt like a real privilege. The mere fact of being connected made everyone feel like less of a stranger.

I kind of miss that Internet, but I'm grateful that once I was part of it.

by reconnecting

2/26/2026 at 2:44:51 AM

There’s a page “Robert’s comments on Tim’s MIT trip” that says:

“I hope this does not offend Brewster, but I hope, probably in vain, that the commercialists will stay out of the Web world. Selling information is like selling air and water to me, though of course you need to pay the people who provide the information. Your comment already points out some of the bad side-effects of selling per access, or worse, tariffs per type of information or per item! Like: today's newspaper is 10CHF because there is this item in it which everyone wants to know about.”

Interesting too that an article on the front page the other day was about microtransactions for news.

by us-merul

2/26/2026 at 4:34:50 PM

I wonder which Robert said that.

The problem of viable news business models persists, and micro-payments have been proposed, but I have yet to see a viable implementation. Also, I think paying per news story isn't the right level of granularity. Articles that are less popular also need to be written, and the people that wrote them need food, too.

by jll29

2/26/2026 at 7:03:05 PM

"I'm also curious because I remember that the first time I used the Internet (not internet, as it is nowadays), I had to buy a paper book with categorized links to websites."

I was looking at one of these books the other day called "The Internet Yellow Pages"

In the early 90s McGraw Hill published a book with this title

I found a version published by Que (Pearson Education) from 2007 (I suspect there may even be later editions)

That's 14 years into the public www

This was right before the iPhone

I never used these books. The best resources I remember were lists of sites published via FTP

Some of the nostalgia is still easily accessible via textfiles.com

I still use the internet in much the same way I did when I was first connected through a university. UNIX-like OS, no graphics, 100% command line

The main difference between then and now for me is the hardware and bandwidth

Everything is so much faster

IME, generally any slowness today is due (directly or indirectly) to the commercialisation ("monetisation") of _traffic_, e.g., ads, tracking, or having to use Tor to avoid all the nonsense

Originally the idea of commercial use of the internet was to sell products and services (excluding "advertising services"), not to sell and "monetise" _traffic_

Internet subscriber bandwidth is now used by companies for free to perform data collection, surveillance, telemetry, mostly undetected by the subscriber

For example, the majority of "Big Tech" revenues do not come from selling products and (non-advertising) services but from performing data collection, surveillance and "ad services". Even popular subscription software that predated the web, e.g., MS Windows, is engaged in data collection, surveillance and ads/tracking as a "business". Apple, once a traditional hardware company, is engaged in this activity as well

1. I have been doing some information retrieval experiments and the speed can be mind-blowing

by 1vuio0pswjnm7

2/26/2026 at 5:30:32 AM

Crawler. Heh.. never thought of it that way.

by dirk94018

2/26/2026 at 12:08:28 AM

Wow. Which year was it?

by LowLevelKernel

2/26/2026 at 1:27:22 AM

In 1993, you could refresh the home page of NCSA Mosaic every day and it would mention new sites that had been added. That became unmanageable quickly, which is when two dudes from Stanford started a directory.

by alain94040

2/26/2026 at 2:45:47 AM

The NCSA What's New page!

https://www.computerhistory.org/revolution/the-web/20/388/21...

by hackingonempty

2/26/2026 at 5:01:44 AM

I've been trying to track down "What's New" for a long long time. If memory serves, there was a daily email titled "What's New on the World Wide Web" - very possibly the source for this monthly summary.

It was a fascinating way to experience the early WWW's exponential growth. It started out small, but once it began to grow, you could see it expanding faster and faster practically in real time.

At first it only took seconds to give the daily list a good once over. Over time it started taking minutes, then 20 minutes or half an hour (if things weren't too busy at work), and eventually it morphed into almost another full time job. There was just no way to keep up. Around that time they stopped sending it out.

From a historical point of view, these daily emails and monthly summaries would be a terrific resource for those interested in the early Web. It's hard to believe now that there was once a time when you could literally check out every new Web site as they came online.

by jsrcout

2/26/2026 at 8:37:13 AM

If you remove commercial, edu, and gov sites, it's still doable today to track NEW unique websites. There are fewer and fewer personal webpages due to Instagram, FB, etc.

by iberator

2/26/2026 at 2:58:32 PM

I keep track of these on my website, Well Made Web. You may like a visit: https://wmw.thran.uk/

by HeckFeck

2/26/2026 at 4:43:07 PM

It would be nice if a "What's new?" could be implemented by the Web protocol, a challenging task in a decentralized network.

New domain registrations would have to be queryable and the result set merged across all domain registrars globally.

That would make (completed) Web crawling easier, esp. of pages not interlinked (yet) with others.

by jll29

2/26/2026 at 4:39:23 PM

Why the Web doesn't have a Scottish Accent

I once asked for funding from a Scottish business angel.

He confided to me that his biggest mistake in life was saying no on a phone call by a certain Tim Berners-Lee, who was looking for someone to help implement a browser for the "World Wide Web".

"Why did you reject him?" I asked. "'World Wide Web' sounded pretentious." said the man who got independently wealthy by selling a company that produced hypertext software (incl. browsers) for technical documentation running on Sun workstations...

...TBL turned to the NCSA team in the U.S. instead, and the rest is history.

by jll29

2/26/2026 at 8:59:35 AM

I believe that an early browser, possibly Mosaic, had an edit button. Think of that and the fundamental change of internet philosophy it implies!

by Daub

2/26/2026 at 9:04:19 AM

The original idea behind a protocol which did updates as much as reads was later realised as Wikipedia and similar sites.

by hdgvhicv

2/26/2026 at 9:12:27 AM

Hence GET / POST / PUT. The WWW was designed to edit documents directly.

by retired

2/26/2026 at 12:16:25 PM

Yeah, and the original HTTP had PUT and DELETE methods.
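For a sense of what that looked like on the wire, here is a rough sketch that builds an early-style PUT request by hand. The path, host, and body are made up for the example; this is illustrative, not a faithful reproduction of any particular draft of the protocol:

```python
# Illustrative only: build a minimal early-HTTP-style PUT request by hand.
# The path, host, and body here are invented for the example.
def build_put(path, host, body):
    head = (
        f"PUT {path} HTTP/1.0\r\n"
        f"Host: {host}\r\n"
        f"Content-Type: text/html\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
    )
    return head.encode("ascii") + body

req = build_put("/hypertext/test.html", "info.cern.ch", b"<p>edited in the browser</p>")
print(req.split(b"\r\n")[0].decode("ascii"))  # PUT /hypertext/test.html HTTP/1.0
```

The point being: the verb for "write this document back to the server" was there from the start; it just fell out of use.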

by kolinko

2/26/2026 at 2:05:57 AM

A little later, but I have a key chain from a dealership with their website advertised on it; they didn't have a domain name, so it's advertised as http://123.123.123.123/web.htm

by Adachi91

2/26/2026 at 4:46:58 AM

Yeah, I made a website as part of a class at BCIT in 1995 and we just had a raw IP so I was showing people my awesome website on http://142.232.162.27 (actual IP) every chance I got.. for all like, 2-3 some-odd people I knew who had computers and had internet access. Luckily I quickly thereafter got a Geocities site which was just a little easier to remember/share lol

by amatecha

2/25/2026 at 11:38:55 PM

The line mode [1] made me pause. Not because you can do anything too useful (most of the cool links are dead, or telnet) but because it seems like a really cool place to explore, learn, and hack.

No ads, no random tits, nobody trying to convert you to their politics, trying to scam you, or telling you to kill yourself. Just people sharing interesting things.

Really makes me excited for the internet until I close the tab.

[1] http://line-mode.cern.ch/www/hypertext/WWW/TheProject.html

by avaer

2/26/2026 at 12:12:55 AM

It just blew my mind! I suppose I shouldn't be surprised at all, JS was written for manipulating the DOM but I was NOT expecting a cool terminal style with a typing/Matrix-style transition animation from some of the first webpages ever.

My brain even ascribed a CRT distortion effect to it, even though that's not actually happening.

edit: okay, no, I am an idiot. Those pages were made in 2013:

https://line-mode.cern.ch/

by bogzz

2/26/2026 at 12:41:12 AM

When this was first created, how did people usually navigate back to the previous page? I notice there are no "previous" or "home" links here. Was there a "back" button/key, or would you have to edit the URL directly?

Edit: Answered my own question I think. If you choose the option to browse "using the line-mode browser simulator", you can literally type in "Back" to go back.

by Nition

2/26/2026 at 12:48:28 AM

This site has a way to experience it as it once was. I’m on mobile now, but from what I remember when I tried it, each link opened a new document window, so the idea of going back wasn’t relevant. You’d simply close the window.

https://worldwideweb.cern.ch/

by al_borland

2/26/2026 at 12:54:34 AM

Yeah, I just wrote an edit to my comment actually after I noticed that. It in fact has an explicit Back command you can run; one of the few commands it supports.

by Nition

2/26/2026 at 4:58:24 AM

It looks like you can also shorten "Back" to "b".

So far, I like this line-mode browser simulator much more than what is commonly available for the command line (lynx or links2). Does anyone know of a modern implementation of it? (Where links are numbered instead of the user having to navigate around the document.)
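The numbered-links idea itself is simple enough to sketch with just the Python standard library. This toy parser (not a real browser; the sample HTML is invented for the example) collects anchors in document order and prints them with reference numbers, line-mode style:

```python
# Toy sketch of line-mode-style numbered links: walk an HTML snippet
# and list its anchors with reference numbers in document order.
from html.parser import HTMLParser

class LinkNumberer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []          # (number, href) pairs in document order

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append((len(self.links) + 1, href))

page = '<p>See <a href="/FAQ.html">the FAQ</a> and <a href="/News.html">news</a>.</p>'
p = LinkNumberer()
p.feed(page)
for n, href in p.links:
    print(f"[{n}] {href}")
# [1] /FAQ.html
# [2] /News.html
```

A real implementation would also render the anchor text inline with its number, but the numbering mechanism is the whole trick.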

by aldto

2/26/2026 at 6:14:32 AM

There are browser extensions such as Vimium C that provide keyboard-based navigation.

by d0mine

2/26/2026 at 12:56:51 AM

We used telnet. There were no graphics per se. Before www the "interactive" internet was gopher and wais and co.

Navigation was moving a cursor around to highlight points of interest, some of which would be links to further stuff or controls to do something like go back or forwards.

Install lynx or links2 (ie text mode browsers) and you'll get the idea.

The vaguely graphic efforts with browsable content that you might recognise before www were the likes of Compuserve. That got you a sort of forum style interface.

It's quite hard to explain just how fast things have moved over the last 40 odd years (I'm 1970 to date - 55). I should also point out that my granddad saw rather a lot of change from 1901 to 1989. To be honest the last 15 odd years are even madder than the previous 25 and that's just my own personal recollection.

by gerdesj

2/26/2026 at 9:40:30 PM

"universal access to a large universe of documents"

It's a sad fact that a large part of the web doesn't work without Javascript, a technology which enables privacy-invasive practices (and surveillance capitalism). It wasn't as bad when progressive enhancement was the norm.

https://en.wikipedia.org/wiki/Progressive_enhancement

by augustk

2/26/2026 at 9:16:42 AM

It’s impressive how quickly the WWW became mainstream, given how few people had internet access back then. Bitcoin is now 16 years old but, compared to the WWW in 2008, is hardly used on a daily basis.

by retired

2/26/2026 at 9:46:35 AM

It's not that WWW was quick, it's that Bitcoin is useless.

by shimonabi

2/26/2026 at 10:06:41 AM

False, it was much more annoying to buy drugs online pre-crypto.

by cap11235

2/26/2026 at 11:21:31 AM

How did people do that before crypto?

by flexagoon

2/26/2026 at 2:49:18 PM

Money in mailed in envelopes. Western Union. Prepaid credit cards. Stolen credit cards. Bank transfer to a legitimate business that funnels it to their drug empire.

by retired

2/26/2026 at 12:17:43 PM

For agent to agent payments, Bitcoin/Ethereum could be quite useful now, as traditional payment protocols and banks are not likely to provide decent APIs any time soon.

by kolinko

2/26/2026 at 4:45:09 PM

The original NextStep design's simplicity and colour scheme appeal to me.

I hope someone will write a "skin"/theme for Ladybird (whose August alpha release we are keenly awaiting) that looks like that before I'll have to do it myself...

by jll29

2/26/2026 at 6:35:53 AM

Ugh, memories. I'm so old my first web browser was Mosaic and I think I saw this. I used a provider called Texas MetroNet that served up dial-up PPP connections for $45 a month on a speedy 28.8K baud modem. Days of wonder, I tell ya.

New days of wonder seem to be ahead, though. That said, there's about 100X more angst involved these days.

by TedDallas


2/26/2026 at 9:22:52 AM

The sad part is, how infinitely more functional these simple, static HTML documents are, compared to much of the shit that floods the "modern" web.

Ofc these pages cannot replace SPAs. That's not the point. The point is: Much of the web isn't SPAs. And much of what is SPAs shouldn't be SPAs. Much of the web is displaying static, or semi-static information. Hell, much of the web is still text.

But somehow, the world accepted that displaying 4KB of text somehow has to require transmitting 32MiB of data, much of it arbitrary code that has no earthly business eating my CPU cycles, as the new normal. Somehow everyone accepts that text-only informational pages need to abuse the scroll-event, or display giant hero-banners. Somehow, having a chatbot-popup on a restaurants menu-page is a must (because ofc I wanna talk to some fuckin LLM wrapper about the fries they sell!!!), but a goddamn page denoting the places address and telephone number is nowhere to be found.

https://idlewords.com/talks/website_obesity.htm

This talk was given over a decade ago, and its takeaways are as relevant today as they were back then, in fact maybe even more so.

by usrbinbash

2/26/2026 at 9:34:09 AM

> Somehow everyone accepts

Everyone did accept that because when you needed information from a page that pulls that shit, you didn't have a choice, and when you did have a choice, all the others did it too.

Nowadays people just ask ChatGPT for the information they need so they don't have to visit those awful sites anymore.

by moring

2/26/2026 at 9:51:39 AM

Some of the stuff we have been adding since then is GOOD though.

Some examples:

We now have to accommodate all types of user agents, and we do that very well.

We now have complex navigation menus that cannot be accessible without JavaScript, and we do that very well.

Our image elements can now have lots of attributes that add a bit of weight but improve the experience a lot.

Etc.

Also, things are improving/self-correcting. I saw a listing the other day for senior dev with really good knowledge of the vanilla stuff. The company wants to cut down on the use of their FE framework of choice.

I cannot remember seeing listings like that in 2020 or 2021.

PS.

I did not mean this reply as a counterpoint.

What I meant to say is, even if we leave aside the SPAs that should not be SPAs, we see the problem in simple document pages too. We have been adding lots of stuff there too. Some is good but some is bad.

by demetris

2/26/2026 at 1:36:45 PM

> We now have to accommodate all types of user agents, and we do that very well.

Simple websites don't even care about the UA.

> We now have complex navigation menus that cannot be accessible without JavaScript, and we do that very well.

Is there an actual menu which is more than a tree? Because a dir element that gets rendered by the UA into native menu controls would be just so much better.

by 1718627440

2/26/2026 at 8:02:11 PM

Websites do care about the UA. They don’t care, at least most don’t care, about the User-Agent string. That is different.

About an element that gets rendered into native menu controls, I am not sure. I haven’t been following closely for the last two or three years. But that seems like a good candidate for a native element. 9 out of 10 websites need it.

by demetris

2/26/2026 at 2:20:54 AM

Has anyone been able to recover the original source code? The README here: https://info.cern.ch/hypertext/README.html mentions a src/ directory under the same location but it 404's to me.

Would love to see the source for the original httpd.

by WD-42

2/26/2026 at 2:43:44 AM

Maybe here you'll find what you are looking for: https://www.w3.org/Daemon/

You can browse and download the latest version, 3.0A (1996). There is also a directory with older versions, but it's a bunch of files mixed up from different versions: https://www.w3.org/Daemon/old/

by 0x00cl

2/26/2026 at 4:11:16 AM

Nice! HTDaemon.good, HTDaemon.old.c, some classic version control practices going on here.

by WD-42

2/26/2026 at 9:45:32 AM

This was in Gopher first, where you had to click a link to view a picture. Then I heard about Mosaic, where you can have pictures and text on the same page. Some problems emerged, until I learned you use <p> to separate chapters: https://timonoko.github.io/alaska/index.htm

by timonoko

2/26/2026 at 5:01:25 AM

Xanadu

Ted Nelson's dream since the early '60s: all the world's literature in one publicly accessible global online system (analogy: you can today get a telephone link from anywhere to anywhere, so why not from any text to any other?). Every reference to a text would lead to royalties being paid automatically to the author. Autodesk (the makers of AutoCAD) will produce a product "real soon now". Includes full versioning (claimed to be horrifyingly complex), "hot links" (called transclusions), and zippered texts (e.g. parallel texts for translations or annotations).

by bblb

2/26/2026 at 6:59:04 AM

Sometimes I really miss the pure, text-first web. No popups, no cookie banners, just raw information.

by lukeiodev

2/26/2026 at 9:07:17 AM

No monetisation. That's what makes everything shit.

You didn’t have cookie banners because you didn’t have cookies, because there is no need for most websites to have cookies.

by hdgvhicv

2/26/2026 at 5:20:25 PM

Interesting that `<dl>` goes back that far; I figured it'd be later.

by someodd

2/26/2026 at 7:31:44 AM

On campus, when our student dorms got wired for internet, we first got Gopher. I remember, because it was hard to follow all these technology developments, that the web was 'suddenly' there, and we started surfing. Everyone made the switch. Early pages were often copies of their Gopher equivalents.

by rapnie

2/26/2026 at 11:00:54 AM

Sure, but have you heard the Eurovision Song contest entry about the web: https://www.youtube.com/watch?v=Zc9quuVYZF4

by designerarvid

2/26/2026 at 12:13:53 PM

I was expecting some sort of futuristic electronic song, Daft Punk inspired, a bit of synthesizer... not this!

by retired

2/26/2026 at 11:09:53 AM

It reminds me of a fun fact: HTTP/0.9 websites are the fastest because they were created years ago and have simple content.
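Part of the speed is the protocol itself: an HTTP/0.9 exchange has no headers at all. The request is one line, and the response is just the document bytes, with no status line. A minimal sketch of the request side (the path is one of the real early CERN pages, but the function is just for illustration):

```python
# An HTTP/0.9 request in its entirety: no version token, no headers,
# a single line terminated by CRLF. The response is the raw document bytes.
def http09_request(path):
    return f"GET {path}\r\n".encode("ascii")

print(http09_request("/hypertext/WWW/TheProject.html"))
# b'GET /hypertext/WWW/TheProject.html\r\n'
```

With nothing to negotiate and nothing to parse beyond the path, there was very little for either side to be slow at.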

by santiago-pl

2/26/2026 at 1:18:16 PM

1992 seems a bit late. Wasn't this first website put online on 20 December 1990?

by codeulike

2/26/2026 at 3:24:03 AM

Not bad PageSpeed scores for the first site:

Performance: 100, Accessibility: 86, Best Practices: 92, SEO: 90

by t1234s

2/26/2026 at 12:47:33 AM

This is great. I particularly enjoyed this entry in the FAQ about how to find web pages: https://info.cern.ch/hypertext/WWW/FAQ/KeepingTrack.html

> When (s)he has found an overview page which (s)he feels ought to refer to the new data, (s)he can ask the author of that document (who ought to have signed it with a link to his or her mail address) to put in a link.

> By the way, it would be easy in principle for a third party to run over these trees and make indexes of what they find. Its just that noone has done it as far as I know

by tempestn

2/26/2026 at 2:12:20 AM

In the mid 70's, I was a graduate CS student at USC's Information Sciences Institute. I remember my feeling of awe when I used Arpanet (or was it Darpanet) to log into London and do stuff there. Wow!

by mjcohen

2/26/2026 at 3:56:57 PM

Well that makes me feel old. I remember making my first site on Geocities not long after.

by tom_m

2/26/2026 at 9:04:44 AM

I really like how different, and yet the same, the HTML tags are.

by ghssds

2/26/2026 at 3:13:22 AM

how did we go from this to nextjs?

by vivzkestrel

2/26/2026 at 10:47:11 AM

There's a difference between document-based web sites like this one and web applications. The potential for web applications has mostly emerged as a side effect of the introduction of JavaScript. Back then, JavaScript was supposed to only add minor enhancements to web sites.

by vaylian

2/26/2026 at 12:58:01 PM

Well... Right here on the very first website Tim Berners-Lee talks about how to build interactive web applications (here called "gateways"), albeit server-side rather than client-side: https://info.cern.ch/hypertext/WWW/FAQ/Server.html
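A "gateway" in that FAQ's sense is just a server-side program that turns a request into an HTML document, typically fronting some existing database. A minimal sketch, with the data and names invented for the example:

```python
# Hypothetical phone-book gateway: map a query to an HTML page, the way
# early server-side "gateways" fronted existing databases. Data is made up.
PHONES = {"tim": "75503"}

def gateway(query):
    hit = PHONES.get(query.lower())
    body = f"<p>{query}: {hit}</p>" if hit else f"<p>No entry for {query}</p>"
    return f"<html><body>{body}</body></html>"

print(gateway("tim"))
# <html><body><p>tim: 75503</p></body></html>
```

Swap the dictionary for a real backend and hook it to an HTTP server, and you have the whole pattern that CGI later standardised.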

by ptx

2/26/2026 at 5:08:44 AM

Money. Ruins everything. And also enables. So it's a win/lose situation.

by bblb

2/26/2026 at 8:55:33 AM

More than money, it's the curse of going mainstream.

by flipped

2/26/2026 at 9:54:45 AM

I love that there is no attribution. Says a lot about the concept of collaboration and knowledge sharing in those early days.

by telesilla

2/26/2026 at 12:05:46 AM

Banned in UAE (at least on DU)

by whatsupdog

2/26/2026 at 1:03:57 AM

That's rather sad; it's just a museum exhibit about the www, so prohibition might look like a pathetic attempt at revisionism.

What is DU?

by gerdesj

2/26/2026 at 8:17:49 AM

Still faster than most websites

by gingersnap

2/26/2026 at 8:29:39 AM

I appreciate the HTTPS support

by vaylian

2/26/2026 at 12:26:49 AM

declaring a website to be "first" introduces a definitional problem.

to put it in terms of a simple example, you need several HTML pages before one of them can link to another, but so far that's just hypertext. then you need pages spread out across plural sites to be able to create a web.

by fsckboy

2/26/2026 at 1:13:26 AM

I found it via gopher and wais - I can't remember which one did what, it was a fair few years ago.

I telnetted from my PC to a VAX, then to an X.25 PAD, then onto a Janet system, then to somewhere in the US and then to CERN. Eventually I'd get a menu with a link to the www. I'd then navigate the www with different keystrokes.

www was/is free form links to stuff instead of hierarchical menus. It was an evolution not a revolution and there is no need to invoke "chicken or egg".

by gerdesj

2/26/2026 at 2:21:45 AM

so you're saying that gopher was the web. i've heard it said before. if you are scared to discuss a chicken and egg problem, you are exactly who should hear it.

by fsckboy

2/26/2026 at 9:17:38 AM

Ah, fond memories... First website in Poland was the homepage of the Faculty of Physics of the Warsaw University (been there at that time). It was 1993 I think, although wayback machine stores a newer snapshot (1998), but it looks the same as original page: https://web.archive.org/web/19980120060239/https://www.fuw.e...

by piokoch

2/26/2026 at 10:11:35 AM

Everyone in silicon valley would do well to remember why the web was built (by other people, elsewhere).

by ballooney
