# TODO

- [x] chore: initial commit
- [x] Deploy first staging version (v1.0.0-staging.1)
- [x] Wikipedia Database Dump
  - [x] Download SQL files
  - [x] Extract SQL files
  - [x] Tables structure `CREATE TABLE`
    - [x] `page.sql` (`pages` table)
    - [x] `pagelinks.sql` (`internal_links` table)
  - [x] Adapt downloaded SQL files
    - [x] `page.sql` (`pages` table)
    - [x] `pagelinks.sql` (`internal_links` table)
  - [x] Import SQL files
  - [x] Try `SELECT count(*) FROM internal_links il WHERE il.from_page_id = (SELECT p.id FROM pages p WHERE p.title = 'Linux'); -- Count of internal links for 'Linux' page`
  - [x] Try:

    ```sql
    SELECT il.to_page_id, pl.title
    FROM internal_links il
    JOIN pages pl ON pl.id = il.to_page_id
    WHERE il.from_page_id = (
      SELECT p.id FROM pages p WHERE p.title = 'Node.js'
    );
    ```

  - [ ] Move from the POC (proof of concept) in the `data` folder to the `apps/cli` folder
    - [ ] Documentation on how to use it + last execution date
  - [ ] Rewrite the bash script that downloads and extracts SQL files from the Wikipedia Database Dump in Node.js, for better cross-platform support, easier maintenance, and automation; preferably one Node.js script that generates everything needed to create the database (see the download sketch below)
  - [ ] Verify the file content up to (but excluding) the inserts, to check whether it matches the last version, and diff it against the last version
  - [ ] Update the logic that creates the custom `internal_links` table so it works with the latest Wikipedia dumps (notably the change in `pagelinks.sql`, where the title is no longer included; it now uses `pl_target_id`, a foreign key to `linktarget`); last dump tested and working: `20240420` (see the `internal_links` sketch below)
  - [ ] Handle redirects
- [ ] Implement REST API (`api`) with JSON responses ([AdonisJS](https://adonisjs.com/)) to get the shortest paths between 2 pages
  - [x] Init AdonisJS project
  - [x] Create Lucid models and migrations for the Wikipedia Database Dump: `pages` and `internal_links` tables
  - [x] Implement `GET /wikipedia/pages?title=Node.js` to search for a page by title (not necessarily with the title sanitized; search with the user's input to check whether the page exists)
  - [x] Implement `GET /wikipedia/pages/[id]` to get a page and all its internal links by its pageId
  - [ ] Implement `GET /wikipedia/shortest-paths?fromPageId=id&toPageId=id` to get all the possible paths between 2 pages (e.g. `Node.js` `26415635` => `Linux` `6097297`) (see the BFS sketch below)
  - [x] Set up tests with the database + add coverage
  - [x] Set up Health checks
  - [x] Set up Rate limiting
  - [ ] Share VineJS validators between `website` and `api` (see the validators sketch below)
- [ ] Implement Wikipedia Game Solver (`website`)
  - [x] Init Next.js project
  - [x] Try to use for API calls
  - [ ] Implement a form with inputs and a submit button, and list all the pages to go from one page to the other, or none if it is not possible
  - [ ] Add images and links to the pages + good UI/UX
  - [ ] Autocompletion of page titles
  - [ ] Implement toast notifications for errors, warnings, and success messages
- [ ] Implement CLI (`cli`)
  - [ ] Init Clipanion project
  - [ ] Implement the `wikipedia-game-solver internal-links --from="Node.js" --to="Linux"` command to get all the possible paths between 2 pages (see the Clipanion sketch below)
- [ ] Add docs on how to add a locale/edit translations, create a component, install a dependency in a package, create a new package, the technologies used, the architecture, links to where it's deployed, how to use/install it for end users, how to update dependencies with `npx taze -l major`, etc.
- [ ] GitHub Mirror
- [ ] Delete the `TODO.md` file and instead use issues for the remaining tasks
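## Sketches

A minimal sketch of what the Node.js replacement for the download/extract bash script could look like, using only Node.js built-ins (Node 18+ for the global `fetch`). The dump date reuses the `20240420` value from the list above; the exact file list and the `./data/dumps` output folder are assumptions, not the project's current layout.

```ts
// scripts/download-dump.ts (hypothetical path)
// Downloads and decompresses Wikipedia dump SQL files with Node.js built-ins only.
import { createWriteStream } from 'node:fs'
import { mkdir } from 'node:fs/promises'
import { Readable } from 'node:stream'
import { pipeline } from 'node:stream/promises'
import type { ReadableStream as WebReadableStream } from 'node:stream/web'
import { createGunzip } from 'node:zlib'

const DUMP_DATE = '20240420' // last dump tested in the list above
const BASE_URL = `https://dumps.wikimedia.org/enwiki/${DUMP_DATE}`
const FILES = [
  `enwiki-${DUMP_DATE}-page.sql.gz`,
  `enwiki-${DUMP_DATE}-pagelinks.sql.gz`,
]

const downloadAndExtract = async (fileName: string): Promise<void> => {
  const response = await fetch(`${BASE_URL}/${fileName}`)
  if (!response.ok || response.body == null) {
    throw new Error(`Failed to download ${fileName}: ${response.status}`)
  }
  const destination = `./data/dumps/${fileName.replace(/\.gz$/, '')}`
  // Stream: HTTP body -> gunzip -> .sql file on disk (no full buffering in memory).
  await pipeline(
    Readable.fromWeb(response.body as WebReadableStream),
    createGunzip(),
    createWriteStream(destination),
  )
  console.log(`Extracted ${destination}`)
}

await mkdir('./data/dumps', { recursive: true })
for (const fileName of FILES) {
  await downloadAndExtract(fileName)
}
```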
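For the `pl_target_id` change: in the newer dump layout, `pagelinks` rows no longer carry the target title; they reference `linktarget` (`lt_id`, `lt_namespace`, `lt_title`), which then has to be matched against `page`. A sketch of the join that could rebuild the custom `internal_links` table, assuming the raw dump tables were imported as-is and a PostgreSQL database reachable through the `pg` client and `DATABASE_URL` (the project's actual import script may use a different client).

```ts
// Sketch: rebuild internal_links(from_page_id, to_page_id) after the pl_target_id change.
import pg from 'pg'

const createInternalLinksSql = `
  INSERT INTO internal_links (from_page_id, to_page_id)
  SELECT pl.pl_from AS from_page_id, p.page_id AS to_page_id
  FROM pagelinks AS pl
  JOIN linktarget AS lt ON lt.lt_id = pl.pl_target_id
  JOIN page AS p
    ON p.page_namespace = lt.lt_namespace
    AND p.page_title = lt.lt_title
  WHERE lt.lt_namespace = 0; -- keep only links to the main (article) namespace
`

const client = new pg.Client({ connectionString: process.env.DATABASE_URL })
await client.connect()
await client.query(createInternalLinksSql)
await client.end()
```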
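The shortest-paths endpoint is essentially a breadth-first search over `internal_links`. A sketch of the graph part, independent of AdonisJS: `getOutgoingLinks` is a placeholder for whatever reads the outgoing link ids from the database, and the function returns a single shortest path of page ids (returning *all* shortest paths would additionally require tracking every parent found at the same BFS depth).

```ts
// Breadth-first search over the internal links graph: returns one shortest path
// of page ids from `fromPageId` to `toPageId`, or null if no path exists.
type GetOutgoingLinks = (pageId: number) => Promise<number[]>

export const findShortestPath = async (
  fromPageId: number,
  toPageId: number,
  getOutgoingLinks: GetOutgoingLinks,
): Promise<number[] | null> => {
  if (fromPageId === toPageId) {
    return [fromPageId]
  }
  // parents.get(pageId) = page we discovered `pageId` from, used to rebuild the path.
  const parents = new Map<number, number>()
  const visited = new Set<number>([fromPageId])
  let frontier = [fromPageId]

  while (frontier.length > 0) {
    const nextFrontier: number[] = []
    for (const pageId of frontier) {
      for (const linkedPageId of await getOutgoingLinks(pageId)) {
        if (visited.has(linkedPageId)) {
          continue
        }
        visited.add(linkedPageId)
        parents.set(linkedPageId, pageId)
        if (linkedPageId === toPageId) {
          // Rebuild the path by walking the parents back to the start page.
          const path = [toPageId]
          let current = toPageId
          while (current !== fromPageId) {
            current = parents.get(current)!
            path.unshift(current)
          }
          return path
        }
        nextFrontier.push(linkedPageId)
      }
    }
    frontier = nextFrontier
  }
  return null
}
```

In practice, `getOutgoingLinks` would be batched per BFS level (one `WHERE from_page_id IN (...)` query for the whole frontier) rather than called once per page, as done here for readability.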
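For sharing VineJS validators between `website` and `api`, one option is a small shared package that both apps depend on; the `packages/validators` location and the exact rules below are assumptions, not the project's current code.

```ts
// packages/validators/src/wikipedia.ts (hypothetical shared package)
import vine from '@vinejs/vine'

// Validates the query string of GET /wikipedia/pages?title=...
export const searchPagesValidator = vine.compile(
  vine.object({
    title: vine.string().trim().minLength(1).maxLength(255),
  }),
)

// Validates the query string of GET /wikipedia/shortest-paths?fromPageId=...&toPageId=...
export const shortestPathsValidator = vine.compile(
  vine.object({
    fromPageId: vine.number().positive(),
    toPageId: vine.number().positive(),
  }),
)
```

Both apps would then call, e.g., `await searchPagesValidator.validate(input)` on their respective raw inputs.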
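A Clipanion sketch of the `internal-links` command; the file location is hypothetical, and the body only echoes its inputs since the actual path lookup would go through the `api` (or shared logic).

```ts
// apps/cli/src/main.ts (hypothetical path)
import { Cli, Command, Option } from 'clipanion'

class InternalLinksCommand extends Command {
  static paths = [['internal-links']]

  from = Option.String('--from', { required: true })
  to = Option.String('--to', { required: true })

  async execute(): Promise<number> {
    // Placeholder: the real command would fetch the shortest paths between the two pages.
    this.context.stdout.write(`Searching paths from "${this.from}" to "${this.to}"...\n`)
    return 0
  }
}

const cli = new Cli({
  binaryName: 'wikipedia-game-solver',
  binaryVersion: '1.0.0',
})
cli.register(InternalLinksCommand)
cli.runExit(process.argv.slice(2), Cli.defaultContext)
```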
## Links

- How to get all URLs in a Wikipedia page:
- [YouTube (Amixem) - WIKIPEDIA CHALLENGE ! (ce jeu est génial)](https://www.youtube.com/watch?v=wgKlFNGU174)
- [YouTube (adumb) - I Made a Graph of Wikipedia... This Is What I Found](https://www.youtube.com/watch?v=JheGL6uSF-4)