
It's worth noting that the quote the title is referencing, when examined with any level of scrutiny, is completely unfounded:

https://www.reddit.com/r/AskHistorians/comments/hd78tv/does_...

https://acoup.blog/2020/02/28/collections-the-fremen-mirage-...


A Reddit comment with no academic sources listed should not be considered a meaningful source.

On a similar note, your second source is a blog post about concepts in the book Dune, and it contains only a couple of references to historical societies - it's not an academic source.

I'm not taking a position on the OP title, but I would find an actual academic study on the possibility of such a phenomenon interesting.


> On a similar note, your second source is a blog post about concepts in the book Dune, and it contains only a couple of references to historical societies - it's not an academic source.

I don't know about the reddit poster, but the blogger at least has the right credentials: https://acoup.blog/about-the-pedant/:

> Dr. Bret C. Devereaux is an ancient and military historian who currently teaches as a Visiting Lecturer in the Department of History at the University of North Carolina at Chapel Hill. He has his PhD in ancient history from the University of North Carolina at Chapel Hill and his MA in classical civilizations from Florida State University.

That said, it seems like the "hard times create strong men" part of the formulation has been debunked more thoroughly than the "good times create weak men" part.


Does a setting like this also work for the iPad app? That's typically what I watch Netflix on while falling asleep.


Not sure if you're going to see this, but yes, it works for all forms of Netflix.


Oh awesome, thanks!


Forgive me if this is a dumb comment to make, as I'm only just starting to get into monitoring and the statistics knowledge that goes along with it, but adaptive fault detection does tend to scare me a bit. In the event that a problem isn't a spike, and instead builds up gradually over hours/days/weeks, I wouldn't be confident in something picking a dynamic threshold for me. I'd be afraid of it deeming the ever-rising resource usage normal behavior, if it happens slowly enough, and of not being alerted before it's too late (servers becoming unresponsive).
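
To make my worry concrete, here's a toy sketch (a made-up rolling z-score detector, not any real product's algorithm): a metric that grows 0.5% per sample never strays more than three standard deviations from its own trailing 60-sample window, so it grows roughly 20x without a single alert.

  window = []
  alerts = 0
  value  = 100.0

  600.times do
    value *= 1.005                      # metric creeps up ~0.5% per step
    if window.size >= 60
      mean = window.inject(0.0) { |s, v| s + v } / window.size
      var  = window.inject(0.0) { |s, v| s + (v - mean) ** 2 } / window.size
      alerts += 1 if var > 0 && (value - mean).abs > 3 * Math.sqrt(var)
    end
    window << value                     # the baseline quietly absorbs the drift
    window.shift if window.size > 60
  end

  puts "final value: %.1f, alerts fired: %d" % [value, alerts]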


That's not at all a dumb comment. As I alluded to in the post, I think it's important that we understand how these systems determine what is - or isn't - an abnormality or fault. Unfortunately, that often means revealing their "secret sauce" and risking the exposure of their product differentiation. It's going to be interesting to see how these products earn our trust.


Absolutely - this is one of the reasons we made Kale open source: so that people can see what we consider an anomaly and adapt it for their own use cases if needed. If your anomaly detection contains secret sauce, it'll be very hard for people to have confidence in it.


> Problem is…people who wear glasses can’t wear Google Glass.

Uhh, what? I was under the assumption, at least from all their marketing material and everything that I've heard about Glass from other people on the internet, that there are models that fit over your existing prescription glasses. No?


Yeah. Google is working on this right now. The sticking point, from what I understand, is making the frames compatible with the IR sensor that does wink detection.

We whipped up a quick adapter in a few minutes using OpenSCAD and found that the IR sensor was the pain point. http://www.thingiverse.com/thing:88426


I may be wrong! Thanks for correcting me. That having been said, it should REPLACE glasses, not be worn in addition to them. It still doesn't solve the problem for their prime target demographic.


What's the difference between them offering snap-on Glass over existing prescription glasses, and offering an existing Glass model with snap-in prescription lenses? You need to have one or the other, and in both cases it's an addition to normal glasses, not a replacement. Your requirement can never be satisfied for those with sub-par vision.

As for your main point: for this initial model, the main benefit it offers, that I hear from users time and time again, is the instant photo/video capability. Not having to dig your phone or camera out of your pocket / handbag is a huge win for a lot of people in the initial explorer program.

But that's not why I think the product will succeed. I can see the future potential of an always-on heads-up display, and that's just too useful to pass up. Will it ever get there? Maybe not, but I think it has a greater chance of that than failure.


As is, it can rest on top of your glasses, but it looks weird. Companies are looking into making prescription lenses that fit onto the frame. There will likely be options if/when it becomes a consumer product.


Fabric to bootstrap new Salt minions, and Salt for the actual deployment.


This is an awfully wrong approach. As with things like alcohol, if you teach your children how to handle it and act responsibly, they get into less trouble with it as adults.

If you stunt their growth and don't allow them to touch a gun until they're 18, they won't have developed enough to handle one themselves without supervision.


Wait are you advocating we lower the drinking age to produce more responsible drinkers?

I think you'll find most irresponsible drinkers started before 21 anyway.


"I think you'll find most irresponsible drinkers started before 21 anyway."

Drinking illegally at 18 years old might lead to different outcomes than drinking legally at the same age.


Starting before 21 legally != Starting before 21 illegally

You'd be surprised how much of a difference it makes when you remove the rebelliousness and allure of doing something illegally. You see this time and time again with parents sheltering their children from certain activities or harsh realities.


The "killer features" all come from you using other Google products/services while signed-in to the same account that you use with Google Now.

Search for "pizza" or a specific address on your computer while signed-in to Google Maps? The next time you look at your phone it tells you how long it would take to get there, and offers you turn-by-turn directions, automatically.

Have a flight confirmation email sent to your signed-in Gmail account? You'll be notified on your phone if your flight gets delayed, how long the trip will be, and information about your destination. Same goes for package delivery confirmation emails.

And those really only scratch the surface. The more you use it, the more you come to rely on it, which makes you want to use Google-branded services more and more often. It's a brilliant move on their part.


There's a difference between HN believing that businesses should legally be allowed to do all of the above, and HN believing that it's socially / morally wrong to do any of the above.

(Note that this is merely a reply to your comment in a vacuum, and shouldn't be construed as me supporting either "side" in this thread.)


My statement isn't actually supporting either side of this thread either, just pointing out that their behaviour seems quite consistent with the things HN has vocally defended with all the zeal of a college freshman who has just discovered "Atlas Shrugged."


And again, there's the belief that someone should have the legal right to perform an action, regardless of whether or not that action is reprehensible.

Ex: You should have the legal right to believe that all green-skinned humans are inferior to blue-skinned humans. But that belief makes you a jerk and I will judge you for it.


"You must do what you think is right, of course"--Obi-Wan Kenobi

But we shouldn't confuse how you or I feel with the aggregate behaviour of HN in toto. My observation is that as a herd, HN approves of the legality, morality, and sanity of businesses, especially tech startups, making unconstrained choices that may appear "discriminatory" to others.


I'm not sure HN has ever agreed on anything enough to be described collectively as defending it. I suspect the members of the HN community who defended that viewpoint may be college freshmen who just discovered "Atlas Shrugged". I don't see much ground to say, "People on HN sometimes say this, so isn't that the ethos now?"


It's impossible and immoral to have two projects with the same name. How dare they.


Well maybe, but from the outside they look like really similar projects (just different languages).


Completely unscientific, but if these outputs are any indication, this is going to be great news for Ruby users in the future...

  $ time ruby -e "puts 'hello world'"                                                                                                                           
  hello world

  real    0m0.184s
  user    0m0.079s
  sys     0m0.092s

  $ time ~/Downloads/topaz/bin/topaz -e "puts 'hello world'"                                                                                                    
  hello world

  real    0m0.007s
  user    0m0.002s
  sys     0m0.004s


There is a neural net example benchmark in the topaz git repo. Don't know how representative that example is, but at least startup time shouldn't be dominating the results...

  $ ruby -v
  ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-darwin11.3.0]
  $ ruby bench_neural_net.rb
  ruby bench_neural_net.rb  17,74s user 0,02s system 99% cpu 17,771 total

  $ bin/topaz bench_neural_net.rb
  bin/topaz bench_neural_net.rb  3,43s user 0,03s system 99% cpu 3,466 total


You should be able to run any number of Ruby examples from here: http://benchmarksgame.alioth.debian.org/u32/ruby.php


You should check out a tool I heard about recently called "ministat". An example of how to use it to benchmark multiple runs and then compute statistics across those runs is available here:

http://anholt.net/compare-perf/

Example output:

  +------------------------------------------------------------------------------+
  |                   +                            x                             |
  |                   +                            x                             |
  |           +       +                            x x            x              |
  | +    ++   +++     +                            x xxx xx       x              |
  |++ ++++++++++++++++++                   x  x    x xxx xxx  xxxxx              |
  |++ ++++++++++++++++++ +++ +       ++   xxxxxxxxxx xxxxxxxxxxxxxxxx  xx x    xx|
  |     |______MA______|                        |________A________|              |
  +------------------------------------------------------------------------------+
      N           Min           Max        Median           Avg        Stddev
  x  57      45.62364     46.437353      45.93506     45.951554    0.19060973
  +  57     44.785579     45.534727     45.042576     45.056702    0.16634531
  Difference at 95.0% confidence
    -0.894852 +/- 0.0656777
    -1.94738% +/- 0.142928%
    (Student's t, pooled s = 0.178889)
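
Producing the input files is just a matter of logging one wall-clock time per line and pointing ministat at the two files; something along these lines should work (assuming GNU time - the BSD/macOS time doesn't take -f or -o):

  $ for i in $(seq 20); do /usr/bin/time -a -o mri.dat -f %e ruby bench_neural_net.rb > /dev/null; done
  $ for i in $(seq 20); do /usr/bin/time -a -o topaz.dat -f %e bin/topaz bench_neural_net.rb > /dev/null; done
  $ ministat mri.dat topaz.dat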


In order to eliminate the potential bias of startup time, I ran a similar test with many iterations. Here's what I got:

    $ time ruby -e "10000.times { puts 'hello world' }" > /dev/null
    real    0m0.102s
    user    0m0.096s
    sys     0m0.005s
and

    $ time ./topaz -e "10000.times { puts 'hello world' }" > /dev/null
    real    0m0.098s
    user    0m0.071s
    sys     0m0.026s
Any idea why I don't see such a big difference?


topaz probably has a pretty slow IO (for bad reasons, it's an RPython problem)


Because the majority of the cycles are probably spent in 'puts', which is implemented in C, if I were to guess.


The overhead of starting the interpreter must be larger than the time spent executing the loop.


Because a puts loop is I/O bound, not CPU bound.
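
Right - so if the goal is to compare the interpreters rather than the I/O path, a CPU-bound one-liner with no printing inside the loop is more telling, using the same binaries from above. Something like this (assuming Topaz handles these constructs; I haven't verified):

  $ time ruby -e "x = 0; 1_000_000.times { |i| x += i * i }; puts x"
  $ time ~/Downloads/topaz/bin/topaz -e "x = 0; 1_000_000.times { |i| x += i * i }; puts x"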


what is your 'ruby'?


Good question. How many shells is rbenv/rvm executing? Do you get similar results with an absolute path?


ruby that comes with Debian Squeeze.

    $ ruby --version
    ruby 1.8.7 (2010-08-16 patchlevel 302) [i486-linux]


To scratch a quick itch from an interesting thought that came into my head, I just had a look at Cardinal, which is a Ruby implementation running on the Parrot VM - https://github.com/parrot/cardinal

So I downloaded & built Cardinal (which went seamlessly, though I did already have Parrot installed), then ran the same benchmark alongside ruby1.8:

  $ time ruby -e "puts 'hello world'"
  hello world
  
  real	0m0.130s
  user	0m0.049s
  sys	0m0.071s

  $ time parrot-cardinal -e "puts 'hello world'"
  hello world

  real	0m0.057s
  user	0m0.037s
  sys	0m0.019s
Very interesting because I thought Cardinal was supposed to be slow!

I think more diverse benchmarks are required. And, time permitting, I might add Topaz & ruby1.9 into the mix.


Whenever I see frontpages for these kinds of projects like "a faster X" or "X written in Blub", the first thing I want to see on the frontpage is how this new project compares to X in terms of quality and performance. Even specious benchmarks would help more than zero benchmarks.

I wish more frontpages for these kinds of projects would do that.


If they put that on their frontpage, there would be at least 20 posts on here bashing them for it because they didn't get it right (or just accusing them of outright lying/incompetence).


"Even specious benchmarks would help more than zero benchmarks."

I disagree. Zero benchmarks is definitely better than specious benchmarks.


To clarify, I was trying to use "specious" as a synonym of "flawed." I thought this was the common usage, but apparently not.

As to your point, obviously no one should be making any decisions off of flawed benchmarks, but flawed benchmarks (not so far as outright lies, just flawed) at least give me an objective justification to investigate further.

Even some flawed benchmarks could help turn the initial tide of responses like "this is X written in Blub, it's bound to be better!" or "this is a faster X! Now everything will be twice as fast!" They're silly examples, but it seems like every time a new technology comes out, these are the kinds of knee-jerk, overly-optimistic reactions people tend to have.


" To clarify, I was trying to use "specious" as a synonym of "flawed." I thought this was the common usage, but apparently not."

"Specious" does mean, more or less, flawed.

Zero benchmarks are better than flawed benchmarks.


All benchmarks are flawed in one way or another.


spe·cious /ˈspēSHəs/ Adjective Superficially plausible, but actually wrong: "a specious argument". Misleading in appearance, esp. misleadingly attractive: "a specious appearance of novelty".

So you are saying you would prefer wrong information?


http://www.merriam-webster.com/dictionary/specious

1 obsolete : showy 2: having deceptive attraction or allure 3: having a false look of truth or genuineness : sophistic <specious reasoning>

So instead of taking the common meaning of specious, a deceptively attractive benchmark, we have to go to a less common use of specious in order to construct a specious argument about the proper use of specious? Never mind that it's distracting from the main point of the discussion about benchmarks in the context of Topaz.


For what it's worth, amalog's definition of "specious" is the one I'm familiar with. My girlfriend recently quizzed me on her GRE words with that one, and my definition was the given one, too.

So I'd argue that amalog's definition IS the common one.


It depends where you look for the common meaning. To native English speakers in the UK, Australia, and New Zealand (i.e. outside North America, coincidentally including the country where the English language developed), specious has one very clear meaning.

I would also direct you to the usage examples in your link, all of which use specious in a context that implies deception or outright falsity. I have never ever seen specious used synonymously with obsolescence.


I read that as "showy (obsolete)". Either way, amalag's says "wrong", the other says "deceptive" and "false"; I'm not sure what anyone's arguing about. Specious data has no value.


I disagree with your argument! If inventing the language meant you got to choose all the definitions we wouldn't have English in the first place. Shift happens. I don't think this particular usage is common, though.


I was using "specious" as a synonym of "flawed." Perhaps that was the incorrect usage.

But in the case of advertising a new library/project, which is arguably one of the main functions of the frontpage, flawed benchmarks (though not so far as outright lies) at least give me an objective reason to investigate further.

With no benchmarks, generally I'll open the page, mutter "that's nice" and move on with my business. I'd imagine I'm far from the only person who does that. Young projects don't help themselves when they don't effectively advertise themselves.


Expanding on the concept of entirely unscientific benchmarks, I benched some simple prime number math with Topaz, Ruby, JRuby and RBX.

Looks promising for Topaz! https://gist.github.com/havenwood/4724778
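
For the lazy who don't want to click through: the gist has the actual script, but the general shape of "simple prime number math" is a tight CPU-bound loop, which is exactly where a JIT shines. A stand-in example (not the gist's code):

  # Not the gist's script; just an illustration of the kind of
  # CPU-bound loop where a JIT tends to do well.
  def prime?(n)
    return false if n < 2
    i = 2
    while i * i <= n
      return false if n % i == 0
      i += 1
    end
    true
  end

  count = 0
  2.upto(100_000) { |n| count += 1 if prime?(n) }
  puts count   # => 9592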


[deleted]


The idea was to implement enough of the "hard stuff", so the numbers should not change much. But hello world is not a good benchmark (that one might actually change due to library loading, so please don't benchmark it like that).


Makes sense. I wasn't claiming any "absolute" benchmark, of course, just pointing out that real benchmarks should wait until the implementation is more complete.


In other words: if you can run it today, you should believe the numbers. If you cannot run it, then, well, you cannot.


> the lack of many core features probably helps out a lot atm.

I don't understand why the lack of features would affect the time of "hello world."


Loading the standard library takes time. With a smaller (underimplemented) standard library, you can get to the user's code much more quickly.
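
You can see part of that effect on MRI 1.9 just by skipping RubyGems at boot and comparing (numbers vary by machine; 1.8 doesn't have this flag):

  $ time ruby -e "puts 'hello world'"
  $ time ruby --disable-gems -e "puts 'hello world'"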


Perl 5 had an in-memory footprint of about 768k. VisualWorks Smalltalk had a standard one of 12MB, yet it loaded and started faster than Perl did with its 768k. If just your standard library takes very long at all to load, something needs attention.


Yeah, a lot of language runtimes seem to not have discovered mmap.


Exactly.

  $ time echo "hello world"
  hello world

  real    0m0.000s
  user    0m0.000s
  sys     0m0.000s


  $ type /bin/echo
  /bin/echo is /bin/echo
  $ time /bin/echo "hello world"
  hello world

  real	0m0.009s
  user	0m0.002s
  sys	0m0.004s

  $ type echo
  echo is a shell builtin
  $ time echo "hello world"
  hello world

  real	0m0.000s
  user	0m0.000s
  sys	0m0.000s


Try comparing with Rubinius


Impressive, how does it compare to JRuby?


That's probably not a fair test to use on JRuby, as the JVM is notoriously slow to start.


That doesn't make it unfair to JRuby, it just means JRuby will probably lose that benchmark. :)


If the goal is to benchmark Ruby execution time, it's unfair. The slow startup time is a valid concern when considering short-lived sessions, but it's kind of meaningless when trying to benchmark a Ruby implementation.


There are no 'fair' benchmarks; all benchmarks should be biased toward the problem you're actually solving, running your actual workload. If you can't replay your workload at multiples of real volume, then you should probably work on doing that before benchmarking, as it helps you out with the real problem of verifying your infrastructure.

In general a benchmark is probably the worst metric you could ever use for deciding on an implementation, unless the profit margin of your business is razor thin and dependent on eking out every last drop of performance. And even then, most of those gains will come from extremely small sections of code that are probably best written in assembler by a programming God, and you should investigate FPGAs, ASICs, and other high-performance solutions.

If your benchmark (infrastructure) involves a database (or anything that uses disks) that's probably going to be the problem long before the speed of your language / language implementation.


I don't even know where to begin with this.

In all comparisons, you should remove confounding variables. Yes, you should benchmark something you actually care about, otherwise what's the point? That doesn't mean all other variables are immediately null and void. That's why I said that if your goal is to measure ruby execution time, you should remove startup time.

As for the practice of benchmarking in general, you're partially right. Micro-benchmarks are usually useless because they don't map to real work load. But profiling and speeding up small portions that are used heavily can have drastic improvements that in isolation seem small -- the death by a thousand cuts problem. Not all improvements come from isolated instances with very slow performance profiles.

This fallacy about DB access and not needing to optimize really needs to go away though. Even if 50% of your app's time is spent hitting the DB, you have the opportunity to speed up the other 50%, and it's likely far easier. Ruby in particular is ripe for improvements on the CPU side. I managed to reduce my entire test suite time by 30% by speeding up psych. I managed to cut the number of servers I need in EC2 in half by switching from MRI, Passenger, and resque to JRuby, TorqueBox, and Sidekiq. And I've managed to speed up my page rendering time anywhere from 8 - 40x by switching from haml to slim. None of these changes required modifications to my DB, none required me to write assembly, none required me to switch to custom-built hardware, and each helped reduce the expenses for my bootstrapped startup, while improving the overall experience for my customers.
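
For anyone wanting to try this themselves, the lowest-friction starting point is the profiler that ships with Ruby - slow, but fine for finding hotspots - and ruby-prof when you need more detail. (`some_task.rb` below is just a placeholder for whatever code path you care about.)

  $ ruby -rprofile some_task.rb    # flat profile printed when the process exits
  $ gem install ruby-prof          # more detailed profiling when the stdlib one isn't enough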


How much does one EC2 server net you in revenue?

What is the percentage increase in profitability yielded from these optimizations?

What was the percentage increase in profitability yielded from the last A/B test of your homepage CTA copy?


An EC2 server doesn't net me anything in revenue. It costs us money and falls into the category of expenses. Reducing expenses makes us more profitable. Reducing our expenses by 50% made us roughly 50% more profitable.

Better than that, this savings isn't one-time. It's recurring, as EC2 is recurring. But we've also reduced the expense growth curve (the savings wasn't linear), so we can continue to add customers for cheaper.

The A/B testing thing is a complete non sequitur. a) there's no reason you can't do both. b) most A/B testing yields modest improvements.


I find it interesting that reducing your EC2 usage by 50% decreased your expenses by 50%. It means one of two things: either you aren't paying your employees and don't have any overhead, or the cost of EC2 dwarfs the cost of your employees and overhead.

If it's the latter, I'd seriously consider colo as you can probably reduce costs by another 80%.


Obviously the discussion was scoped around non-personnel expenses. I'm not going to dump an entire P&L here. And this is now wildly off-tangent.

I was illustrating that there is real world gain to be had by doing something as simple as switching to a new Ruby or spending some time with a profiler. These weren't drastic code rewrites. They didn't require layers of caching or sharding of my database. I fail to see what's even contentious about this.


As long as it can reasonably be expected to be mostly bug-free and support everything you need it to with little change to the app. It wouldn't take too much time playing around with it before the EC2 savings are eaten up by the wage costs of spending that time. (Depending on how many servers you are running, of course.)


This isn't a theoretical argument. I actually did this and it didn't take all that long and some of it was even fun. An added benefit is my specs run faster, too. So developer time is saved on every spec run now. You also hit that intangible of improved developer happiness.

Additionally, the X time exceeds Y cost argument really only works when people are optimally efficient. Clearly those of us posting HN comments have holes in our schedules that might be able to be filled with something else.


Amdahl's law disagrees -- the value of improving the non-DB part is limited.


I'm not trying to be snarky here, but that would advocate for no improvements anywhere else. Why did Ruby 1.9 bother with a new VM? Why try to improve GC? Why bother with invokedynamic? Why speed up JSON parsing? Why bother with speeding up YAML? Yet there's obviously value in improving all these areas and they speed up almost every Ruby app.

It's overly simplistic to say that the only option is to cache everything, or that your DB is going to be your ultimate bottleneck so the other N - 1 items aren't worth investigating.

And even in the link you supplied, the illustrative example is getting a 20 hour process down to 1 hour without speeding up the single task that takes 1 hour. It suggests there's an upper limit, not that because there is an upper limit you can't possibly do better than the status quo.


" that would advocate for no improvements anywhere else"

Amdahl's law advocates starting from the part that takes the most time. In a database application, it can be interpreted as either A) improving the connector or B) reducing the application's demand for database resources.

"Why did Ruby 1.9 bother with a new VM? Why try to improve GC? Why bother with invokedynamic? Why speed up JSON parsing? Why bother with speeding up YAML? Yet there's obviously value in improving all these areas and they speed up almost every Ruby app."

JSON parsing improves those applications that use JSON parsing, and in many applications JSON parsing is the main operation. There are many other applications for which garbage collection is the limiting factor. You are taking my comment, which was addressing the parent comment's remark that "This fallacy about DB access and not needing to optimize really needs to go away though.", way out of context. It's not a fallacy -- you need to know what is dominating execution time and how to improve that aspect.

Take it to the logical extreme -- you could just write in x86 assembly directly. The program would be faster than ruby, but the development time would not make assembly a worthwhile target.

"And even in the link you supplied"

What link did I supply? I recommend the Hennessy and Patterson "Computer Architecture" book :)


Sorry about the link comment. I'm so used to Wikipedia links being passed around I must have instinctively looked there.

In any event, we probably agree on more than we disagree. I never disagreed with working on the DB if that's truly the bulk of your cost. But, you do actually need to measure that. It seems quite common nowadays to say "if you use a DB, that's where your cost is". And I routinely see this as an argument to justify practices that are almost certainly going to cause performance issues.

Put another way, I routinely see the argument put forth that the DB access is going to be the slowest part, so there's little need to reduce the other hotspots because you're just going to hit that wall anyway. And then the next logical argument is all you need is caching. The number of Rubyists I've encountered that know how to profile an app or have ever done so is alarmingly small. Which is fine, but you can't really argue about performance otherwise.


The issue is that, to some, "ruby execution time" may include the startup time.


But if it's not executing ruby, then it can't be ruby execution time... That's the point I'm making. By all means, if you want to measure start-up time, that's valid as well, just not for "how fast does this execute Ruby."


You may have a finer-grain definition of "execute Ruby" than someone else.


It's not meaningless if you're trying to make a command line app and not a Rails app. Startup time does matter for command line apps (a lot!).


Agreed. But then you're not benchmarking Ruby execution speed, which really makes comparing the two not very worthwhile. By the same notion, then when you compare on something like JRuby, you really should be comparing JDK 6, 7, and 8 builds, along with various startup flags, both JVM and JRuby. E.g., short runs may benefit greatly from turning off JRuby JIT, JAR verification, and using tiered compilation modes. May as well add Nailgun and drip in there as well, since both can speed up startup time on subsequent runs. Which one of these is going to be the JRuby you use in your comparisons? Or are you really going to show 20 different configurations?

You can run into the same problem with MRI and its GC settings. If too low for your test, you're going to hit GC hard. It's best to normalize that out so you have an even comparison. Confounding variables and all that.

There's a lot you can do outside the Ruby container to influence startup time. When comparing two implementations, the defaults are certainly something to consider, but not when trying to see which actually executes Ruby faster. They are two different metrics of performance and should be compared in isolation.
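
One low-tech way to take startup and library loading out of the picture entirely is to time inside the process with the stdlib Benchmark module. A sketch (assuming the implementation under test ships enough of `benchmark`, which Topaz may not yet):

  require 'benchmark'

  # The work under test goes in the block; interpreter startup and
  # library loading before this point are excluded from the number.
  elapsed = Benchmark.realtime do
    x = 0
    1_000_000.times { |i| x += i * i }
  end
  puts "execution only: %.3fs" % elapsed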


Well startup times of the JVM aren't terribly relevant if, for example, your app doesn't need to start the JVM every time it's used.


In production server apps the JVM is always running anyway, and under some degree of Hotspot optimization, so for a JRuby benchmark to be informative and worth anything to you, you'll want to account for that.
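
And if you want the warmed-up number, the simplest thing is to repeat the same workload several times in one process and look at the later iterations, once HotSpot has had a chance to compile the hot paths. A rough sketch, not a rigorous methodology:

  require 'benchmark'

  # Later iterations reflect JIT-compiled (warmed-up) performance on JRuby.
  5.times do |i|
    t = Benchmark.realtime do
      x = 0
      1_000_000.times { |j| x += j * j }
    end
    puts "iteration #{i}: %.3fs" % t
  end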


[deleted]


Topaz doesn't run rails yet (as far as I know, I didn't even dare to try!), so I doubt you'll find any benchmarks ;) There is one benchmark in the bench/ directory of the repository you can try though!


Instead of asking, you could read the linked article, which itself says it isn't complete enough to run Rails yet.


How the hell is your Ruby that slow to start?

    $ time ruby -e "puts 'hello world'"
    hello world
    
    real    0m0.011s
    user    0m0.008s
    sys     0m0.003s


The first time I ran:

   time ruby -e "puts 'hello world'"
   hello world

   real	0m0.221s
   user	0m0.005s
   sys	0m0.006s
subsequent times:

   time ruby -e "puts 'hello world'"
   hello world

   real	0m0.008s
   user	0m0.005s
   sys	0m0.003s
 
So I guess he ran ruby first, followed by topaz, and ended up with those results.


    $ time ruby -e "puts 'hello world'"
    The program 'ruby' can be found in the following packages:
     * ruby1.8
     * ruby1.9.1
    Ask your administrator to install one of them

    real	0m0.060s
    user	0m0.040s
    sys	0m0.016s


Don't know if you are trolling. But if this is genuine: this is rbenv telling you that you have multiple rubies installed.

1. Pick one (just for this session): $ rbenv shell ruby1.9.1

2. And then run the example.

By the way, 1.9.1 is really old already; 1.9.3 has a lot more bug fixes.


ruby1.9.1 on Debian isn't ruby 1.9.1.

The Ruby language changed between 1.9 and 1.9.1, so a new package name had to be created.

If 1.9.1 was just called "1.9" it would break all of the packages in Debian that depend on whatever language features were different between 1.9 and 1.9.1.

"ruby1.9.1" in Debian 7.0 provides version 1.9.3.194.


The OP's output looked more like rbenv's output though, right? How did you figure that to be debian's message?

