
Thanks for the reminder. The homepage is hosted on GitHub Pages, which does not support Git LFS, so I have compressed the files as much as possible to reduce their size. We are considering re-encoding the mp4 files to x264 and providing a packed zip of the homepage.
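Something along these lines is what we have in mind (a sketch; file names and the CRF value are placeholders to tune for quality vs. size):

    # H.264 re-encode; +faststart lets playback begin before the full file downloads
    ffmpeg -i input.mp4 -c:v libx264 -crf 28 -preset slow -movflags +faststart -c:a copy output_x264.mp4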


I scanned the open-source codec list (https://en.wikipedia.org/wiki/List_of_open-source_codecs) and I'm a little confused: is H.265 not OPEN? :-(


It's a bit of a mess. The implementation of a codec (that is, an encoder or a decoder) can be open source even when the format itself is not open. H.265 does have open implementations, but the format itself is not open. The opposite can be true as well; there are, for example, proprietary encoders for open formats. The actual list of open video formats: https://en.wikipedia.org/wiki/List_of_open_file_formats#Vide...

What OP meant is that they would like an open format on the website, which can then be viewed in any modern browser. I think caniuse is a good resource in this regard.

https://caniuse.com/av1

https://caniuse.com/hevc

WebM with VP9 video is a good general browser target, I think:

https://caniuse.com/webm

But funnily enough, even though H.264 is not open, it's a widely decoded video format as well:

https://caniuse.com/mpeg4
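For a page like OP's, a common pattern (just a sketch; file names and the CRF value are placeholders) is to encode a VP9/WebM version and keep the H.264 mp4 as a fallback, since the <video> element tries sources in order:

    # constant-quality VP9: -crf with -b:v 0 is the documented mode for libvpx-vp9
    ffmpeg -i demo.mp4 -c:v libvpx-vp9 -crf 32 -b:v 0 -c:a libopus demo.webm

    <video controls>
      <source src="demo.webm" type="video/webm">
      <source src="demo.mp4" type="video/mp4">
    </video>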


This is exactly why I am not convinced that VVC is going to be useful; it seems to have little advantage over AV1, as well as being late to the party in the first place.


Well yeah, they want to collect rent, so they have to keep developing these things. It also depends on what business deals they make in the background. If the format secures a foothold in some applications, that might cement it as a quasi-standard, which they can then leverage for further adoption.

I hope open standards keep winning. Overall, everyone wins when the infrastructure is openly accessible, especially the common folk.


https://github.com/deepinsight/insightface?tab=readme-ov-fil...

"The code of InsightFace is released under the MIT License. There is no limitation for both academic and commercial usage."


That is the code; the weights are non-commercial:

"Both manual-downloading models from our github repo and auto-downloading models with our python-library follow the above license policy (which is for non-commercial research purposes only)."


Understood. The core dependency on InsightFace in LivePortrait is the face detection algorithm. The face detector can easily be replaced with a self-developed or MIT-licensed model.


Exactly, just replace it with any detection or segmentation model lol; FastSAM or a YOLO model can find the face. No reason to be using InsightFace for that.
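For instance, a minimal sketch with the ultralytics YOLO API (the checkpoint name is a placeholder for whatever permissively licensed face-detection weights you pick; note that ultralytics itself is AGPL, so check the license fit):

    from ultralytics import YOLO
    import cv2

    # Hypothetical checkpoint: any YOLO weights trained for face detection work here.
    model = YOLO("yolov8n-face.pt")

    img = cv2.imread("portrait.jpg")
    boxes = model(img)[0].boxes

    # Keep the highest-confidence detection as the face box, mimicking the
    # bounding box the InsightFace detector feeds to the rest of the pipeline.
    x1, y1, x2, y2 = boxes.xyxy[boxes.conf.argmax()].tolist()
    face = img[int(y1):int(y2), int(x1):int(x2)]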



Fixed! h_h


:)


Not a random face, but it may not look very similar.


Access is OK.


Thanks, I will consider adding it later :)


Thanks for your advice. I have no prior experience with collaborative projects, so my commit messages are kind of meaningless. I think you are right. I will read the commit-message guidelines and take more care with them later. Thanks again for your critical voice.


Having flexibility for “pointless” commit messages is really important for research-y projects. Sadly, the practice often clashes with readers who have never really done research before.

A nice compromise is to use GitHub PRs plus squash-merge commits (search for "GitHub squash merge button"). For example, you might start a project, commit a bunch of "garbage" commit messages, and then decide the project is ready for initial release. Then take your branch, create a PR, and squash-merge it to your master branch with a nice commit message.

Need to update your paper on arXiv? Create a branch, commit willy-nilly, then squash-merge the result with a nice message (that perhaps references the updated arXiv version).
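If you prefer the command line over the button, the local equivalent is roughly this (branch name and message are just examples):

    git checkout master
    git merge --squash arxiv-v2-updates   # stages the whole branch as one change set
    git commit -m "Update results for arXiv v2"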

If for some reason your project grows like Caffe did years ago, then it can be time for smaller PRs and more organized commit messages.


Thanks for your detailed reply and explanation. I learned from it.


Thanks for your advice : )


You're welcome to give it a try :)


I had some problems getting it to run, but after some hand-holding I managed to get all the demos working. Amazing stuff!

I'll try to open an issue with all the problems I encountered.

I would also appreciate a demo with data output (the actual 2D/3D points), along with a short description of the format.

Can this be used in real time?


Thanks for your interest and for trying it. The theoretical computational complexity is described in the paper; it is rather small. However, whether it achieves real time really depends on your hardware, your needs, and the code optimization.
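If you want a rough number for your own setup, timing the forward pass is the quickest check. A minimal sketch (model loading and preprocessing come from your own pipeline; on a GPU, remember to synchronize before reading the clock):

    import time

    def measure_fps(model, img, n=100):
        """Time n forward passes and return inferences per second."""
        model(img)  # warm-up so one-time initialization is not counted
        start = time.perf_counter()
        for _ in range(n):
            model(img)
        return n / (time.perf_counter() - start)

As a rough rule of thumb, 25 to 30 inferences per second is the usual bar for real-time video.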


Could you please share the training code so that I can better understand the implementation?


Thanks for your interest. Releasing the training code has to be approved by my lab leader; I am applying for permission. If you have any questions, feel free to raise an issue or email me :)

