Neural nets are similar enough to our own brains that they will likely share some of the foibles of human cognition. The "I don't know" question is intriguing: when we theorize prior to experimentation, we don't know either.
Teaching AI to say "I don't know" seems important.
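One simple way this idea shows up in practice is selective prediction: the model abstains when its confidence falls below a threshold instead of forcing an answer. A minimal sketch, where the labels, logits, and threshold are all illustrative assumptions:

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_or_abstain(logits, labels, threshold=0.8):
    """Return the top label, or 'I don't know' if top confidence is below threshold."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "I don't know"
    return labels[best]

labels = ["cat", "dog", "bird"]
print(predict_or_abstain([4.0, 0.5, 0.2], labels))  # confident -> "cat"
print(predict_or_abstain([1.0, 0.9, 0.8], labels))  # uncertain -> "I don't know"
```

A fixed softmax threshold is the crudest version; calibrating the model so its confidence actually tracks its accuracy is the harder part of the conundrum.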