Hacker News | shtylman's comments

Can you hoist the `if (result)` into the `try` part of the statement? (Without seeing more context, it's hard to know why that wouldn't work for you.)
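
For instance, a minimal sketch of what I mean, reusing the names from your snippet (so the exact shape is an assumption about your code):

    try {
        const result = await funcThatReturnSomeType();
        if (result) {
            doSomething(result);
        }
    } catch (err) {
        doSomethingWithErr(err);
    }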

Another pattern to avoid the above is to remember that async functions return promises and that .catch() also returns a promise. So your above logic can be written as:

  const result = await funcThatReturnSomeType().catch(doSomethingWithErr);
  if (result) {
    doSomething(result);
  }


And if you hate the indentation from the `if (result) {}` you can combine this with the poor man's ? operator.

    const result = await funcThatReturnSomeType().catch(convertError); // result: SomeType | Error
    if (isError(result)) return result;
    // now result: SomeType
EDIT: the ? operator in question - https://doc.rust-lang.org/edition-guide/rust-2018/error-hand...


You can also get rid of `if(result){}` by setting the return type of "doSomethingWithErr" to "never":

    function doSomethingWithErr(err: any): never {
        throw new Error("Oops");
    }

    let result: SomeType;
    try {
        result = await funcThatReturnSomeType();
    } catch (err) {
        doSomethingWithErr(err);
    }
    // because doSomethingWithErr has return type "never", result will be definitely assigned.
    doSomething(result);

...or just return in the catch block.
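
That is, something like this (same hypothetical names as above; the explicit `return` means the catch branch cannot fall through, so no `never` annotation is needed for `result` to count as assigned afterwards):

    let result: SomeType;
    try {
        result = await funcThatReturnSomeType();
    } catch (err) {
        console.error(err);
        return;
    }
    // only reachable when the try completed normally, so result is definitely assigned
    doSomething(result);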


Interestingly, when I ran into this myself recently, I discovered that a JS `finally` block's return can override a value the function has nominally already returned. Consider the following closure.

    (() => {
      try {
        // this log runs, but the 'try block' value returned below gets discarded
        console.log('this try was executed and the return ignored')
        return 'try block'
      } catch (e) {
        return 'error block'
      } finally {
        // this return wins: the whole IIFE evaluates to 'finally block'
        return 'finally block'
      }
    })()


I don't think that's the right way of thinking about it. The behavior I see is consistent with my understanding of `finally` from other languages.

Basically, `finally` guarantees that it actually runs once the `try` block is exited. Likewise, `return` effectively assigns the return value and exits. But it doesn't (cannot and should not) breach the contract of try-finally, since the whole point of try-finally is to ensure resources are cleaned up correctly. So a `return` exits either to the caller or to the innermost pending `finally` block, whichever is nearer.

In your case, a return value is assigned and the `try` block is exited using `return`. We then have to continue into the `finally` block, since that is the core meaning of `finally` - we run it after we leave `try`. And then with `return`, we reassign the return value and finally leave the whole function. At this point, the return value is the second one that was assigned to it.
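
A small self-contained sketch of that contract (the names are just illustrative): `finally` always runs after the `try` has "returned", but it only replaces the return value if it issues a `return` of its own.

    function demo() {
      try {
        return 'from try';
      } finally {
        console.log('cleanup runs even though try already returned');
        // no return here, so 'from try' remains the result
      }
    }

    console.log(demo()); // logs the cleanup message, then 'from try'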

Maybe thinking of it like this is helpful, although I somewhat hope it isn't. You can see that "return" is reassigned before we have a chance to read it. I've simplified by removing any consideration of errors, but I console.logged the final output.

    // this is the function call at the end of your IIFE
    next_code.push(AfterMe)
    goto Anonymous

    // this is the function definition
    Anonymous:
        // this is the try-finally idiom
        next_code.push(FinallyBlock);
        // this is the try
        console.log("this try was executed");
        // these two lines are the first return
        var return = 'try block';
        goto next_code.pop();
        // this is the finally
        FinallyBlock:
            var return = 'finally block';
            goto next_code.pop();

    // this code gets executed from the FinallyBlock's goto, as if you had a console.log(...) around your whole definition
    AfterMe:
        console.log(return)


It was presumably a choice between this behavior and a statement in the finally block not taking effect, even though finally is guaranteed to run. Either way, some assumption about normal program behavior gets invalidated.

Unless one goes the PowerShell way and just forbids returning from finally.


This is how it's specified for anyone interested: https://stackoverflow.com/a/3838130/298073


What the hell?


> Can you hoist the `if (result)` into the `try` part of the statement?

And now you wrap the call to doSomething() in the try/catch too. Often (usually?) the try/catch is there specifically for the asynchronous function. Usually that's because the async stuff might fail due to expectable (even if undesirable) runtime conditions (e.g. "file not found"), while the synchronous code should work for the most part. It's external-condition errors vs. coding errors, and your catch is about the former; for example, you might want coding errors to just crash the app and be caught during testing.

Sure, you can claim that you only check for specific errors that can happen in that function, so any error from doSomething() wouldn't change the outcome, or that doSomething() never throws because you are sure of the code (the async function may throw based on runtime conditions, but you have development-time control over any issues in doSomething()). But if you start going down that path, relying on the developer doing their job perfectly for each such construct, later maintainability and readability go down the drain: you have to re-verify that claim every time you come across the construct. That is why you really don't want anything inside the try/catch whose errors you don't want to see in the catch block. So my policy is to never do that, even if it would work in the given context; it places extra work on whoever reads that section later (or they are unaware of the problem and won't spot it, which is no better).
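
A minimal sketch of the scoping I mean (reusing the hypothetical names from upthread): only the call that is allowed to fail at runtime sits inside the try, and doSomething() stays outside, so a coding error in it surfaces instead of being swallowed by the catch.

    let result: SomeType | undefined;
    try {
        // only expectable runtime failures (e.g. "file not found") belong here
        result = await funcThatReturnSomeType();
    } catch (err) {
        doSomethingWithErr(err);
    }
    if (result) {
        // a bug in doSomething now propagates to the caller, not into the catch above
        doSomething(result);
    }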


Personally I find SQL much easier to read. In a product where you might allow users to write queries, it is a much more pleasant experience to write and read SQL than it is to do the same with ES queries. It isn't hard to understand how to build an ES query object, but an ES query object is pretty terrible to read IMO.
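
For example (index/table and field names made up), here is the same filter as SQL text and as a roughly equivalent Elasticsearch query object:

    // SQL
    const sql = "SELECT * FROM users WHERE status = 'active' AND age >= 21";

    // roughly equivalent Elasticsearch query DSL
    const esQuery = {
      query: {
        bool: {
          must: [{ match: { status: 'active' } }],
          filter: [{ range: { age: { gte: 21 } } }]
        }
      }
    };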


It is none of their business why I needed my money or how many trips it took.


I understand what you are saying, but the law says different. My main aim with the comment was to point out the (apparent) subtlety, not to declare my views on the situation.


the future is now


Duplicate dependencies affecting client-side "size" is a highly overrated concern. We can make tools (and have tools) that compare files for identical content and include them in client-side bundles only once. De-duplicating identical code should be a tooling issue. If I want to use client-side modules that depend on two different versions of the same library, I should be able to do that. Disk space is cheap and we can do the rest via post-processing.
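
A toy sketch of that kind of tooling (the hashing choice and API are illustrative, not any particular bundler): hash each file's contents and emit identical files into the bundle only once.

    const crypto = require('crypto');
    const fs = require('fs');

    // keep only the first file seen for each distinct content hash
    function dedupe(paths) {
      const seen = new Set();
      return paths.filter((p) => {
        const hash = crypto.createHash('sha1').update(fs.readFileSync(p)).digest('hex');
        if (seen.has(hash)) return false; // same bytes already included
        seen.add(hash);
        return true;
      });
    }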


Systems where people seem to live under the assumption that if they don't get the latest patch or security updates, their code will somehow be inferior and they will instantly be hacked. They think that semver will magically solve the problem of having to keep your application dependencies up to date if you want the latest changes/features/fixes.


Pinning in development is equally necessary if you hope to bring sanity to your development team all using the same codebase. Not pinning may be great for a one person 'team' or modules where you control everything, but can be a major time sink when one person is seeing a bug in a patch version they have and another developer is not seeing the same bug because they are on a newer patch. Allowing developers to "clone" consistent versions of the codebase is as important as deploying consistent versions to production.
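
Concretely (package names and versions made up), pinning just means exact versions in package.json instead of ranges, so every clone of the codebase resolves to the same thing:

    {
      "dependencies": {
        "some-lib": "1.2.3",
        "other-lib": "4.5.6"
      }
    }

With range specifiers like "~1.2.0" or "^1.2.0", two developers installing on different days can silently end up on different patch versions.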


Why should they reconsider it? Take someone who just got into development and wants to show friends or family something they are hacking away on. According to you, they now need to learn about a VPS, some random nginx settings or other SSH nonsense, and meet some arbitrary "minimal" criteria you have decided upon because that is how you would do it. I think you should reconsider your acceptance of people who don't share your technological expertise.


You can. See the README.


Author of the project here.

I want to clarify why this project exists (as many seem to point out that other projects or methods exist for doing this).

TL;DR; If you think of localtunnel as just a shitty ngrok (or name your project here), you are missing the point and probably don't have the same use cases I do.

1. It was made overnight at some hackathon because I was not satisfied with the other tunneling options I found. They required either an account or some stupid ssh setup. I got to thinking of ways to create a tunnel with nothing but a CLI tool, instantly and with no setup. It worked, so I kept it.

2. It is written as a library first, CLI tool second. This means it can be used to create tunnels in a test suite if you want to use services like saucelabs to run browser tests (see https://github.com/defunctzombie/zuul); there's a sketch of that usage after this list. This is leveraged by projects like socket.io and engine.io (among others). This is perhaps the main reason I keep it around despite there being alternative CLI tools.

3. Both the client and server code are available and easy to install and use. Companies do this when they want to run their own tunnels for privacy (or whatever their reasons... I don't care).

4. Yes, I know the name is identical to the old ruby?python? one. Whatever. That one seems defunct now anyway.
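
For point 2, library-style usage looks roughly like this (check the README for the exact signature; treat the option names here as an approximation):

    const localtunnel = require('localtunnel');

    (async () => {
      // ask the server for a tunnel to a local port, e.g. before running browser tests
      const tunnel = await localtunnel({ port: 3000 });
      console.log(tunnel.url); // public URL that forwards to localhost:3000
      // ... run the test suite against tunnel.url ...
      tunnel.close();
    })();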


Very cool.

ngrok doesn't have a programmatic API, but I'd love to add one soon. I've built out a library for this in https://github.com/inconshreveable/go-tunnel that will be the foundation for ngrok's next version providing a library in addition to the CLI tool.

Unfortunately, one of Go's weaknesses is that it doesn't embed into other languages the way C does, so I'd need a ground-up rewrite (in C, probably) with bindings to other languages.

If ngrok the command-line tool had a well-defined programmatic interface (like RESTful JSON), would that be useful, or is the burden of a separate binary/process to manage still too painful?


To me, having the library installable with the "canonical" package manager of my platform is just too convenient; it "feels" more natural and simpler. I actually thought about writing a node.js ngrok client but then gave up on the idea, since localtunnel was working well enough and I personally didn't need the ngrok features that localtunnel lacks.

I wouldn't worry about the whole rewrite-in-C thing. If your server protocol is simple enough, writing native clients in each language will be better than writing bindings. Installing bindings trips up a lot of users who are not used to compiled software.


Makes sense, and I agree. Thanks for the feedback! Unfortunately, ngrok's new protocol is optimized very heavily for speed, which comes with a cost in complexity for both the protocol and the clients that implement it.


Well thank you for ngrok!

My daily routine is

ngrok start ssh && go home

Stupid firewalls.


I've never heard of ngrok, but the instantly obvious use-case is to allow testing of webhooks to my local machine. In the past we've done this by booting a temporary server on AWS and remote forwarding to our local machines, which is quite a bit more complicated. I already work on a node stack, so the npm install is wonderful. I expect I'll get a lot of use out of this. Thanks!
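
For comparison, the manual setup described above is roughly an SSH remote forward from the temporary box back to the laptop (host and ports illustrative):

    # expose local port 3000 as port 8000 on the temporary AWS host
    ssh -R 8000:localhost:3000 user@temporary-aws-host

You also need the remote sshd configured (GatewayPorts) before the forwarded port is reachable from outside, which is part of why it's "quite a bit more complicated".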


I didn't even think of this. Recently I was implementing MailChimp webhooks and it was a pain. Thanks for the idea.

