This is kind of an off-topic question, but regardless: is there any application written in Smalltalk in the Debian repositories (or those of any other Linux distro)?


The only people I've heard of using Smalltalk at all other than in dogfood situations are people using Gemstone.

https://en.m.wikipedia.org/wiki/Gemstone_(database)


JPMorgan has a fairly serious Smalltalk app: http://www.cincom.com/pdf/CS040819-1.pdf

However, only one. Make of that what you will.


They only have one app because Smalltalk is agile enough that they build everything they need into it as the need arises - and this has been maintained over many years.


Gemstone is not "using" Smalltalk. Gemstone "is-a" Smalltalk.

Here are commercial companies that believe in Pharo (in the money-on-the-table way of measuring belief):

http://consortium.pharo.org/


Interesting. So am I right in assuming there are:

    - 9 bronze level (1k Euro/year)
    - 5 silver level (2k Euro/year)
    - 11 gold level (4k Euro/year)
    - 5 platinum level (8k Euro/year)
...members of the consortium (based on the color of the surround on the member label/icon)? So that would be ~103,000 Euro per year (quick check below)? Or does that signify a one-time contribution (vs. recurring annual)? It doesn't look like there has been a consortium report published since 2015:

http://consortium.pharo.org/web/reports

...any ideas what that means? (i.e. they stopped publishing them for privacy reasons, the consortium no longer has meetings, the consortium is no longer active, etc.?) Any ideas on what projects they are currently funding, and how many man-hours of programmer time are being purchased for Pharo in 2018?
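Quick sanity check on the arithmetic above (the tier counts are read off the member icons, so treat them as assumptions):

    # Assumed tier counts from the consortium page: fee (EUR/year) -> members
    tiers = {1000: 9, 2000: 5, 4000: 11, 8000: 5}
    print(sum(fee * n for fee, n in tiers.items()))  # 103000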


It's a recurring fee. Of course, it's dependent on members feeling they are getting value for money and continuing each year. The platinum level was only recently created.

I don't know what it means that there are no later reports. The consortium is definitely active. It arose because open source software is about people scratching an itch, and often the focus is on the fun stuff and not on the boring, difficult, dirty engineering required to make a product reliable and successful (e.g. operating a CI infrastructure). Pharo arose out of INRIA[1], one of France's publicly funded national research institutes. But INRIA's mission and governance structure are not suited to managing non-research engineers dedicated to working on Pharo. So InriaSoft[2] was created to fork its successful software off into consortiums funding pure engineering work on that software as an ongoing concern. At the moment the consortium has one full-time engineer, but hopes to get another in the next couple of years if the consortium continues at its current growth rate.

AFAIK (I am not directly involved), the main efforts decided by consortium members were:

* Stabilizing 32-bit to 64-bit conversion for Pharo, utilizing and contributing to the OpenSmalltalk-VM project.

* Stabilizing Iceberg as our Git GUI interface

* Maintaining CI infrastructure

BTW, there is also the Pharo Association, with 82 individual members. [3]

[1] https://www.inria.fr/en/institute/inria-in-brief/inria-in-a-...

[2] https://www.inria.fr/en/news/news-from-inria/launch-of-inria...

[3] https://association.pharo.org/


I know of a number of smaller "as a service" businesses that are built using Smalltalk.


Way back in the day, "DabbleDB" was written in Smalltalk. (It was a Zenkit/Airtable type web app.)

If I remember right, they had a separate image per customer, and it took a moment to bring up the VM if you hadn't been to the site in a while.


Sadly they got acqui-hired by Twitter - and judging by some comments (not bad, just lacking enthusiasm), I got the sense that they sold DabbleDB at peak technical/architectural debt...

It's a bit sad; it appeared to be Google Sheets meets Office 365 Flows on steroids before either was any good:

https://youtu.be/6wZmYMWKLkY

(I think that's from 2006 or so; archive.org has more info, but it's a bit painful to navigate on mobile)

Avi Bryant comments on HN from time to time:

https://news.ycombinator.com/user?id=avibryant

As I understand it, each customer got an image/VM - and data was just stored as Smalltalk objects. It'd have been interesting to see where they might have gone if they had coupled the DabbleDB front end with GemStone/S.


Gemstone provides a database. There are very large Smalltalk-based applications that run using Gemstone as the backing database[1]. These applications are, for the most part, very large in-house applications that don't have much external visibility.

[1] http://seaside.gemtalksystems.com/docs/OOCL_SuccessStory.pdf


DrGeo is a geometry package for education.

http://www.drgeo.eu/

MOOSE, which is some sort of code analysis tool:

http://www.moosetechnology.org/

I've never been able to make heads or tails of it, but it's kind of an architect-level tool and I'm just a lowly engineer.


The first one that comes to mind is the Scratch package[1]. The entire Scratch language, as far as I know, runs on top of Squeak.

[1] https://packages.ubuntu.com/source/trusty/scratch


Scratch was rewritten in JavaScript a few years back, AFAIK.


It was rewritten in Flash and is now being rewritten in JavaScript. An independent project created an extended Scratch clone in JavaScript called Snap!.


Gamers use Steam, not the Windows Store.


How do they compare to the Telegram desktop client? Are they Electron apps?


Bitcoin's energy use will tend to rise to match mining profits. As long as both BTC's price and transaction fees keep rising, the energy miners spend will continue to rise. BTC's price is going up due to speculation, and transaction fees due to blocks being full.


Even at current levels, BTC's built-in block rewards dominate tx fees (12.5 BTC built in + 1.38 BTC in tx fees per block). The 12.5 built-in reward is essentially a subsidy for mining.
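To put a rough number on "dominate" (using the figures above):

    # Share of per-block miner revenue that comes from the subsidy vs. tx fees
    reward, fees = 12.5, 1.38            # BTC per block, from the figures above
    print(reward / (reward + fees))      # ~0.90 -- roughly 90% is subsidy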


The block reward is an incentive for redundant distributed replica nodes.


In my experience, a misbehaving Linux system that's out of RAM and has swap to spare will be unusably slow. The process of switching to a tty, logging in, and killing whatever the offending process is can easily take a good 15 minutes. Xorg will just freeze. Oh, and hopefully you know which process it is, or else good luck running `top`.

Until this is fixed, I'll just keep running my systems with very small amounts of swap (say, 512MB in a system with 16GB of RAM). I'd rather the OOM killer kick in than have to REISUB or hold down the power button.

Some benchmarks with regards to the performance claims would be nice.


> In my experience, a misbehaving Linux system that's out of RAM and has swap to spare will be unusably slow.

Yeah, this is basically the main drawback of swap. I tried to address this somewhat in the article and the conclusion:

> Swap can make a system slower to OOM kill, since it provides another, slower source of memory to thrash on in out of memory situations – the OOM killer is only used by the kernel as a last resort, after things have already become monumentally screwed. The solutions here depend on your system:

> - You can opportunistically change the system workload depending on cgroup-local or global memory pressure. This prevents getting into these situations in the first place, but solid memory pressure metrics are lacking throughout the history of Unix. Hopefully this should be better soon with the addition of refault detection.

> - You can bias reclaiming (and thus swapping) away from certain processes per-cgroup using memory.low, allowing you to protect critical daemons without disabling swap entirely.

Have a go setting a reasonable memory.low on applications that require low latency/high responsiveness and seeing what the results are -- in this case, that's probably Xorg, your WM, and dbus.
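If you want to try it, here's a minimal sketch of the cgroup v2 mechanics (the cgroup path and the 512M figure are illustrative assumptions, not recommendations from the article; on a systemd system you'd more likely set MemoryLow= on a slice):

    from pathlib import Path

    # Reserve ~512 MiB for a cgroup holding Xorg/WM/dbus: memory in this group
    # is only reclaimed (and thus swapped) once unprotected memory is exhausted.
    cg = Path("/sys/fs/cgroup/desktop.slice")   # hypothetical cgroup; needs root
    cg.mkdir(exist_ok=True)
    (cg / "memory.low").write_text(str(512 * 1024 * 1024))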


And a multigigabyte brick of a web browser.


You can use Alt+SysRq+f to manually call oom_kill.


On many distros, this is disabled by default because there's a chance that the OOM killer will hit something important, like the screen lock. For Ubuntu, enable it in /etc/sysctl.d/10-magic-sysrq.conf.
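For anyone unfamiliar with the knob, a sketch of enabling it (the value is a bitmask; "1" simply enables every SysRq function, including Alt+SysRq+f - check your distro's docs if you only want to enable some of them):

    from pathlib import Path

    # kernel.sysrq is a bitmask of allowed SysRq functions; 1 = allow all.
    # Needs root, and is lost on reboot unless persisted in /etc/sysctl.d/.
    Path("/proc/sys/kernel/sysrq").write_text("1")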


If you’re using logind, this isn’t a problem – if the screenlocker dies, it will be restarted up to 3 times (without revealing screen content), and if it is killed yet once more, your screen just won’t unlock (you can then unlock via a tty by logging into that and using loginctl session-unlock).

If the daemon responsible for this is killed, all your sessions will simply be killed.

In no situation will your screen unlock due to a process being killed (in contrast to the pre-logind world, where, if the screenlocker died, your screen was free for all, as there the locker was just a fullscreen window).


In 2005 I was able to run Linux on 512MB RAM _without_ swap (on purpose - every day) without issues. Today it will bark at me on 8GB of RAM for not having swap enabled.


I'm running on an 8GB Linux box without swap and never even come close to running out of memory. If I don't have any VMs running, then it's pretty unusual for me to use much more than 1-2 gigs. It's interesting... one of my colleagues has serious problems with performance because he keeps running out of memory -- and I don't think he's doing anything unusual.

I think there is something wrong with some of the major distros. I got really fed up with Ubuntu because of random junk running without my approval and eventually migrated to Arch, simply because I have a lot more control over configuration. I don't mean to trash one distro over another, because each one has its strengths and weaknesses, but I've been surprised at how bloated the average Linux install is these days. I'd love it if there was more attention paid to it.


> I'm running on an 8GB Linux box without swap and never even come close to running out of memory. If I don't have any VMs running, then it's pretty unusual for me to use much more than 1-2 gigs. It's interesting... one of my colleagues has serious problems with performance because he keeps running out of memory -- and I don't think he's doing anything unusual.

64GB here and 40GB used. Firefox alone uses 2 gigs with a mere ~30 tabs.


I run KDE, Firefox, Slack (browser-based app), Chromium, VS Code (browser-based app), plus Gvim, shell, etc. on Arch, and it's currently steady at 3.9 GB (of 16).

Edit: Also be sure it's actually used: https://www.linuxatemyram.com/


Browsers have a tendency to use some percentage of available RAM. Firefox using 40 gigs on a 64-gig machine doesn't mean it'll try to use 40 gigs on an 8-gig machine.


What function does the 0.5GB swap have?


I just wish Linux distro installers would make opting out the default option; no, I don't want to swap on my SSD. The last time I installed a distro, I still had to select the manual option for partitioning.

With an 8 Gig stick in my NUC, for normal desktop usage it never goes above 3.


And even then, they provision an absurd amount of it. I just did a fresh Ubuntu install. On a machine with 32 gigs of RAM, it creates a 32-gig swap partition by default!


They probably reason that you want your system to be able to hibernate, plus storage is cheap.

I have 8GB of swap for the 8GB in my laptop, for that reason.

On my desktop with 16GB, the 2GB of swap it has is sometimes too little, and everything grinds to a halt.


32GB of SSD isn't that cheap!


A last-ditch safety buffer, to induce the slowdown so that you'll recognise that RAM is running low, and hopefully prevent you from actually completely running out.


By that time, my system is usually responding to simple keypresses with latencies >1min... :|


I actually did this on an old laptop: set up 200MB of swap for 4GB of RAM.

And it caused huge problems for me; it would run out of swap while having plenty of free memory, and then go cripplingly slow.


Set the swappiness lower.
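E.g. (60 is the typical default; 10 here is just an illustrative lower value):

    from pathlib import Path

    # vm.swappiness biases reclaim away from swapping anonymous memory;
    # lower values mean "prefer dropping page cache instead". Needs root.
    Path("/proc/sys/vm/swappiness").write_text("10")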


That only delays the inevitable. Try setting overcommit to 2 and ratio to 100, then note apps crashing.
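For reference, those are the vm.overcommit_* sysctls (needs root; with these settings, expect allocations to start failing once commits hit the limit):

    from pathlib import Path

    # Mode 2 = never overcommit: total commits are capped at
    # swap + overcommit_ratio% of RAM.
    Path("/proc/sys/vm/overcommit_memory").write_text("2")
    Path("/proc/sys/vm/overcommit_ratio").write_text("100")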


It doesn't go lower than 0. Swapping can still hose your machine when swappiness is set to 0.


The point is that a system doesn't have to misbehave to allocate more memory than the total RAM. And in those cases, there is a very good reason to have swap space, and swapping won't impact the performance of the system - rather the opposite.


Sure it does misbehave. Memory allocation failures should be handled properly, and by that I mean not by crashing. Very few applications should require memory use beyond current free RAM - especially not a JVM, a JavaScript VM, a web browser, or even a video player. Yet this silly heuristic in Linux lets it happen.


No, that is not correct. If you have any idle processes, their memory can safely be swapped out without impacting performance. The user should not be forced to quit programs as soon as they become idle. Also, as described in the article, a program often allocates pages which become unused and can be swapped out. In an ideal world, a program would not allocate pages which become idle, but that happens with complex software (and often depends on the user interaction, and thus is not completely predictable). Swapping out idle pages is a very simple solution to make more memory available for the active processes.


AMP: that thing you have never seen in the wild if you run Firefox on your phone.


AMP pages show up for me when I use Bing in Firefox but not when I use Google.



IANAL, but I think you can't infringe copyright if you are not distributing anything. OCR is not distribution.


You might think so, but merely copying is infringement, at least in the United States.

There's even good solid precedent that copying computer software from disk to memory for the purpose of running it is a copy covered by copyright. A manufacturer sued a third-party computer maintenance company alleging copyright infringement by the technician in simply turning the machine on, and won.

There's even a specific carveout for software, section 117(a)(1), but the same decision held that since that section applies only to the "owner" of a copy, it doesn't apply to licensed software.

It's a pretty bonkers case. Congress explicitly overruled it by adding _another_ carveout in the same section, 117(c), which specifically says computer repair people don't infringe copyright in the OS by turning the computer on.

Regardless, mere copying with nothing else can be infringement.


If Signal users can be counted on one hand, people using GPG can be counted on a finger.


I don't see a need for GPG at all. There are tons of options for sending email that is encrypted enough to lock out the Iranian government. Any popular mail service would work, and using several email addresses at different mail services would work best.

Also, organising a protest isn't about keeping individual messages secret. You can never keep something secret that you want thousands or hundreds of thousands of people to know about. It's all about getting the message across fast, without the government blocking it.


And without knowing who you are, a requirement most services (including Signal) fail. Does Telegram allow you to create an account without divulging your phone number?


I don't know. I'm not a Telegram user and I wouldn't use any of these messaging services if I wanted to organise a protest.

Email is the only reasonably decentralised and diverse messaging infrastructure we have. It's impossible to block without shutting down the entire Internet in a country.


SHENZHEN I/O is the actual name: http://store.steampowered.com/app/504210/


Isn't SDL2 a lot smaller than Qt5?


They list both as dependencies; that's why I mentioned it.

