wget can recursively pull files from a webpage, provided that webpage has links to other files. That is how the recursion works.
You can't enumerate all of a site's subpages starting from the home page if the home page doesn't link to them; there is simply no way for wget to discover every page in a domain on its own (a brute-force search is not practical).
If you do have links to other pages, then you can use wget's

    --accept-regex urlregex

option to restrict which links are followed recursively.
In your case, if you have one web page that links to, say, path1, path2, ... and each pathX provides further links, you can do what you want with a command along the lines of the sketch below.
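A minimal sketch, assuming the index page lives at https://example.com/index.html and the subpages sit under paths like /path1/, /path2/, ... (the host and paths here are placeholders for your actual URLs):

    # Start at the index page, recurse two levels deep, and only follow
    # links whose URL matches the regex (placeholder pattern, adjust to fit).
    wget --recursive --level=2 --no-parent \
         --accept-regex 'https://example\.com/path[0-9]+/.*' \
         https://example.com/index.html

Note that --accept-regex is matched against the complete URL, so include the host part in the pattern; --no-parent keeps wget from wandering above the starting directory.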