I'm wondering: given a random truth table with N binary variables and 1 binary output, what is, in the worst case, the smallest network (in terms of number of parameters) that can learn it?
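To make the setup concrete, here's a rough sketch (names and sizes are my own arbitrary choices): a random truth table on N binary inputs is just a lookup table with 2^N one-bit entries.

    import numpy as np

    N = 10
    rng = np.random.default_rng(0)
    # Every input combination, enumerated as an N-bit vector.
    X = np.array([[(i >> b) & 1 for b in range(N)] for i in range(2 ** N)],
                 dtype=np.float32)
    # One random output bit per row: the function the network must learn.
    y = rng.integers(0, 2, size=2 ** N).astype(np.float32)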


For a 100% success rate, it would have to be about the size of the minimal BDD. If you allow for some errors, this becomes an interesting problem.
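For reference, here's a rough sketch (my own code, not any particular library) that counts the decision nodes of a reduced ordered BDD (binary decision diagram) straight from the truth table, for one fixed variable order. Finding the order that minimizes the size is a hard problem in itself.

    def robdd_size(table):
        # table: tuple of 2^N output bits, indexed by the input bits.
        # Each distinct subfunction that actually tests a variable
        # becomes one decision node in the reduced OBDD.
        level = {tuple(table)}           # distinct subfunctions at this level
        nodes = 0
        while any(len(f) > 1 for f in level):
            nxt = set()
            for f in level:
                half = len(f) // 2
                lo, hi = f[:half], f[half:]   # Shannon cofactors
                if lo == hi:
                    nxt.add(lo)          # function skips this variable
                else:
                    nodes += 1           # one node tests this variable
                    nxt.update((lo, hi))
            level = nxt
        return nodes

    # e.g. robdd_size((0, 1, 1, 0)) == 3  (XOR of two variables)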


Yes, but that's probably the theoretical minimum.

What I'm talking about is the size required for a neural net to actually learn it through training. That may be different.


Can you define BDD, please?



This is technically true, but I wonder how close you could get to 100% with minimal size. I would expect you could get above 98% with a network that's a few megabytes in size.
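That's easy enough to probe empirically. A rough sketch (PyTorch; the hyperparameters are arbitrary guesses, not tuned): memorize a random 16-variable truth table with an MLP of roughly a million parameters, i.e. about 4 MB at float32, and see what fraction of the table it reproduces.

    import torch
    import torch.nn as nn

    N = 16
    g = torch.Generator().manual_seed(0)
    # All 2^16 input combinations as bit vectors, plus random target bits.
    X = ((torch.arange(2 ** N)[:, None] >> torch.arange(N)) & 1).float()
    y = torch.randint(0, 2, (2 ** N, 1), generator=g).float()

    # ~1.07M parameters: about 4 MB at 4 bytes each.
    model = nn.Sequential(nn.Linear(N, 1024), nn.ReLU(),
                          nn.Linear(1024, 1024), nn.ReLU(),
                          nn.Linear(1024, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(200):              # full-batch gradient descent
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

    acc = ((model(X) > 0).float() == y).float().mean().item()
    print(f"fraction of table reproduced: {acc:.3f}")

Whether it actually clears 98% depends on N and on the training budget; this just shows the shape of the experiment.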



