Compare commits: `not_sure...26287215c5` (664 commits)
.claude/commands/conductor-implement.md (new file, 101 lines)
@@ -0,0 +1,101 @@
---
description: Execute a conductor track — follow TDD workflow, delegate to Tier 3/4 workers
---

# /conductor-implement

Execute a track's implementation plan. This is a Tier 2 (Tech Lead) operation.
You maintain PERSISTENT context throughout the track — do NOT lose state.

## Startup

1. Read `conductor/workflow.md` for the full task lifecycle protocol
2. Read `conductor/tech-stack.md` for technology constraints
3. Read the target track's `spec.md` and `plan.md`
4. Identify the current task: the first `[ ]` or `[~]` in `plan.md`

If no track name is provided, run `/conductor-status` first and ask which track to implement.

## Task Lifecycle (per task)

Follow this EXACTLY per `conductor/workflow.md`:

### 1. Mark In Progress
Edit `plan.md`: change `[ ]` → `[~]` for the current task.

### 2. Research Phase (High-Signal)
Before touching code, use context-efficient tools IN THIS ORDER:
1. `py_get_code_outline` — FIRST call on any Python file. Maps functions/classes with line ranges.
2. `py_get_skeleton` — signatures + docstrings only, no bodies
3. `get_git_diff` — understand recent changes before modifying touched files
4. `Grep`/`Glob` — cross-file symbol search
5. `Read` (targeted, offset+limit only) — ONLY after the outline identifies specific ranges

**NEVER** call `Read` on a full Python file >50 lines without a prior `py_get_code_outline` call.

### 3. Write Failing Tests (Red Phase — TDD)
**DELEGATE to Tier 3 Worker** — do NOT write tests yourself:
```powershell
uv run python scripts\claude_mma_exec.py --role tier3-worker "Write failing tests for: {TASK_DESCRIPTION}. Focus files: {FILE_LIST}. Spec: {RELEVANT_SPEC_EXCERPT}"
```
Run the tests. Confirm they FAIL. This is the Red phase.

### 4. Implement to Pass (Green Phase)
**DELEGATE to Tier 3 Worker**:
```powershell
uv run python scripts\claude_mma_exec.py --role tier3-worker "Implement minimum code to pass these tests: {TEST_FILE}. Focus files: {FILE_LIST}"
```
Run the tests. Confirm they PASS. This is the Green phase.

### 5. Refactor (Optional)
With passing tests as a safety net, refactor if needed. Rerun the tests.

### 6. Verify Coverage
Use the `run_powershell` MCP tool (not Bash — Bash is a mingw sandbox on Windows):
```powershell
uv run pytest --cov=. --cov-report=term-missing {TEST_FILE}
```
Target: >80% for new code.

### 7. Commit
Stage changes. Message format:
```
feat({scope}): {description}
```

### 8. Attach Git Notes
```powershell
$sha = git log -1 --format="%H"
git notes add -m "Task: {TASK_NAME}`nSummary: {CHANGES}`nFiles: {FILE_LIST}" $sha
```

### 9. Update plan.md
Change `[~]` → `[x]` and append the first 7 characters of the commit SHA:
```
[x] Task description. abc1234
```
Commit: `conductor(plan): Mark task '{TASK_NAME}' as complete`

### 10. Next Task or Phase Completion
- If more tasks remain in the current phase: loop to step 1 with the next task
- If the phase is complete: run `/conductor-verify`

## Error Handling
If tests fail with large output, delegate to Tier 4 QA:
```powershell
uv run python scripts\claude_mma_exec.py --role tier4-qa "Analyze this test failure: {ERROR_SUMMARY}. Test file: {TEST_FILE}"
```
Maximum 2 fix attempts. If still failing: STOP and ask the user.

## Deviations from Tech Stack
If implementation requires something not in `tech-stack.md`:
1. **STOP** implementation
2. Update `tech-stack.md` with justification
3. Add a dated note
4. Resume

## Important
- You are Tier 2 — delegate heavy implementation to Tier 3
- Maintain persistent context across the entire track
- Use the Research-First Protocol before reading large files
- `plan.md` is the SOURCE OF TRUTH for task state
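The `[ ]` / `[~]` / `[x]` markers above are easy to tool against. As an illustration only (this helper does not exist in the repo), a minimal Python sketch of the step 4 lookup and the step 1 state flip:

```python
import re

# Matches "- [ ] ...", "- [~] ...", "- [x] ..." task lines in plan.md
TASK_RE = re.compile(r"- \[( |~|x)\] (.+)$")

def find_current_task(plan_text):
    """Return (line_index, state, description) of the first '[ ]' or '[~]' task."""
    for i, line in enumerate(plan_text.splitlines()):
        m = TASK_RE.match(line.strip())
        if m and m.group(1) in (" ", "~"):
            return i, m.group(1), m.group(2)
    return None  # no pending or in-progress tasks

def mark_in_progress(plan_text):
    """Step 1 of the lifecycle: flip the first pending task from '[ ]' to '[~]'."""
    found = find_current_task(plan_text)
    if found is None or found[1] != " ":
        return plan_text  # nothing pending, or a task is already in progress
    lines = plan_text.splitlines()
    lines[found[0]] = lines[found[0]].replace("[ ]", "[~]", 1)
    return "\n".join(lines)
```

The same two functions cover step 9's flip to `[x]` with a trivial variation.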
.claude/commands/conductor-new-track.md (new file, 100 lines)
@@ -0,0 +1,100 @@
---
description: Initialize a new conductor track with spec, plan, and metadata
---

# /conductor-new-track

Create a new track in the conductor system. This is a Tier 1 (Orchestrator) operation.

## Prerequisites
- Read `conductor/product.md` and `conductor/product-guidelines.md` for product alignment
- Read `conductor/tech-stack.md` for technology constraints

## Steps

### 1. Gather Information
Ask the user for:
- **Track name**: descriptive, snake_case (e.g., `add_auth_system`)
- **Track type**: `feat`, `fix`, `refactor`, `chore`
- **Description**: one-line summary
- **Requirements**: functional requirements for the spec

### 2. Create Track Directory
```
conductor/tracks/{track_name}_{YYYYMMDD}/
```
Use today's date in YYYYMMDD format.

### 3. Create metadata.json
```json
{
  "track_id": "{track_name}_{YYYYMMDD}",
  "type": "{feat|fix|refactor|chore}",
  "status": "new",
  "created_at": "{ISO8601}",
  "updated_at": "{ISO8601}",
  "description": "{description}"
}
```

### 4. Create index.md
```markdown
# Track: {Track Title}

- [Specification](spec.md)
- [Implementation Plan](plan.md)
```

### 5. Create spec.md
```markdown
# {Track Title} — Specification

## Overview
{Description of what this track delivers}

## Functional Requirements
1. {Requirement from user input}
2. ...

## Non-Functional Requirements
- Performance: {if applicable}
- Testing: >80% coverage for new code

## Acceptance Criteria
- [ ] {Criterion 1}
- [ ] {Criterion 2}

## Out of Scope
- {Explicitly excluded items}

## Context
- Tech stack: see `conductor/tech-stack.md`
- Product guidelines: see `conductor/product-guidelines.md`
```

### 6. Create plan.md
```markdown
# {Track Title} — Implementation Plan

## Phase 1: {Phase Name}
- [ ] Task: {Description}
- [ ] Task: {Description}

## Phase 2: {Phase Name}
- [ ] Task: {Description}
```

Break requirements into phases with 2-5 tasks each. Each task should be a single atomic unit of work suitable for a Tier 3 Worker.

### 7. Update Track Registry
If `conductor/tracks.md` exists, add the new track entry.

### 8. Commit
```
conductor(track): Initialize track '{track_name}'
```

## Important
- Do NOT start implementing — track initialization only
- Implementation is done via `/conductor-implement`
- Each task should be scoped for a single Tier 3 Worker delegation
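Steps 2 and 3 above are mechanical enough to sketch. Assuming nothing about the real tooling (this is a hypothetical helper, not a repo script), the track id and metadata payload could be derived like this:

```python
import json
from datetime import datetime, timezone

def build_metadata(track_name, track_type, description, today=None):
    """Build the track_id and metadata.json payload for a new track."""
    now = today or datetime.now(timezone.utc)
    track_id = "{}_{}".format(track_name, now.strftime("%Y%m%d"))
    stamp = now.isoformat()  # ISO8601, as the schema requires
    return track_id, {
        "track_id": track_id,
        "type": track_type,          # one of: feat, fix, refactor, chore
        "status": "new",
        "created_at": stamp,
        "updated_at": stamp,
        "description": description,
    }

def metadata_json(track_name, track_type, description, today=None):
    """Serialize the metadata dict exactly as it would be written to disk."""
    _, meta = build_metadata(track_name, track_type, description, today)
    return json.dumps(meta, indent=2)
```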
.claude/commands/conductor-setup.md (new file, 46 lines)
@@ -0,0 +1,46 @@
---
description: Initialize conductor context — read product docs, verify structure, report readiness
---

# /conductor-setup

Bootstrap a Claude Code session with full conductor context. Run this at session start.

## Steps

1. **Read Core Documents:**
   - `conductor/index.md` — navigation hub
   - `conductor/product.md` — product vision
   - `conductor/product-guidelines.md` — UX/code standards
   - `conductor/tech-stack.md` — technology constraints
   - `conductor/workflow.md` — task lifecycle (skim; reference during implementation)

2. **Check Active Tracks:**
   - List all directories in `conductor/tracks/`
   - Read each `metadata.json` for status
   - Read each `plan.md` for current task state
   - Identify the track with `[~]` in-progress tasks

3. **Check Session Context:**
   - Read `TASKS.md` if it exists — check for IN_PROGRESS or BLOCKED tasks
   - Read the last 3 entries in `JOURNAL.md` for recent activity
   - Run `git log --oneline -10` for recent commits

4. **Report Readiness:**
   Present a session startup summary:
   ```
   ## Session Ready

   **Active Track:** {track name} — Phase {N}, Task: {current task description}
   **Recent Activity:** {last journal entry title}
   **Last Commit:** {git log -1 oneline}

   Ready to:
   - `/conductor-implement` — resume active track
   - `/conductor-status` — full status overview
   - `/conductor-new-track` — start new work
   ```

## Important
- This is READ-ONLY — do not modify files
- This replaces Gemini's `activate_skill mma-orchestrator` + `/conductor:setup`
.claude/commands/conductor-status.md (new file, 32 lines)
@@ -0,0 +1,32 @@
---
description: Show current conductor track status — active tracks, phases, pending tasks
---

# /conductor-status

Read the conductor track registry and all active tracks, then report the current project state.

## Steps

1. Read `conductor/tracks.md` for the track registry
2. For each track directory in `conductor/tracks/`:
   - Read `metadata.json` for status
   - Read `plan.md` and count: total tasks, completed `[x]`, in-progress `[~]`, pending `[ ]`
   - Identify the current phase (first phase with `[~]` or `[ ]` tasks)
3. Read the last 3 entries in `JOURNAL.md` for recent activity context

## Output Format

Present a summary table:

```
| Track | Status | Phase | Progress | Last SHA |
|-------|--------|-------|----------|----------|
```

Then for each in-progress track, list the specific next pending task.

## Important
- This is READ-ONLY — do not modify any files
- Report exactly what the plan.md files say
- Flag any discrepancies (e.g., metadata says "new" but plan.md has `[x]` tasks)
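The counting in step 2 can be sketched in a few lines of Python. This is illustrative only; the command itself simply reads the files:

```python
import re

def plan_progress(plan_text):
    """Count task states in a plan.md and identify the current phase."""
    counts = {"x": 0, "~": 0, " ": 0}
    phase = None           # phase heading most recently seen
    current_phase = None   # first phase with a pending or in-progress task
    for line in plan_text.splitlines():
        if line.startswith("## "):
            phase = line[3:].strip()
        m = re.match(r"- \[( |~|x)\]", line.strip())
        if m:
            counts[m.group(1)] += 1
            if m.group(1) in (" ", "~") and current_phase is None:
                current_phase = phase
    return {
        "total": sum(counts.values()),
        "done": counts["x"],
        "in_progress": counts["~"],
        "pending": counts[" "],
        "current_phase": current_phase,
    }
```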
.claude/commands/conductor-verify.md (new file, 85 lines)
@@ -0,0 +1,85 @@
---
description: Run phase completion verification — tests, coverage, checkpoint commit
---

# /conductor-verify

Execute the Phase Completion Verification and Checkpointing Protocol.
Run this when all tasks in a phase are marked `[x]`.

## Protocol

### 1. Announce
Tell the user: "Phase complete. Running verification and checkpointing protocol."

### 2. Verify Test Coverage for the Phase

Find the phase scope:
- Read `plan.md` to find the previous phase's checkpoint SHA
- If there is no previous checkpoint: the scope is all changes since the first commit
- Run: `git diff --name-only {previous_checkpoint_sha} HEAD`
- For each changed code file (exclude `.json`, `.md`, `.yaml`, `.toml`):
  - Check whether a corresponding test file exists
  - If missing: create one (analyze the existing test style first)

### 3. Run Automated Tests

**ANNOUNCE the exact command before running:**
> "I will now run the automated test suite. Command: `uv run pytest --cov=. --cov-report=term-missing -x`"

Execute the command.

**If tests fail with large output:**
- Pipe the output to `logs/phase_verify.log`
- Spawn Tier 4 QA for analysis:
  ```powershell
  uv run python scripts\claude_mma_exec.py --role tier4-qa "Analyze test failures from logs/phase_verify.log"
  ```
- Maximum 2 fix attempts
- If still failing: **STOP**, report to the user, await guidance

### 4. API Hook Verification (if applicable)

If the track involves UI changes:
- Check whether GUI test hooks are available on port 8999
- Run the relevant simulation tests from `tests/visual_sim_*.py`
- Log the results

### 5. Present Results and WAIT

Display:
- Test results (pass/fail count)
- Coverage report
- Any verification logs

**PAUSE HERE.** Do NOT proceed without explicit user confirmation.

### 6. Create Checkpoint Commit

After the user confirms:
```powershell
git add -A
git commit -m "conductor(checkpoint): Checkpoint end of Phase {N} - {Phase Name}"
```

### 7. Attach Verification Report via Git Notes
```powershell
$sha = git log -1 --format="%H"
git notes add -m "Phase Verification Report`nCommand: {test_command}`nResult: {pass/fail}`nCoverage: {percentage}`nConfirmed by: user" $sha
```

### 8. Update plan.md

Update the phase heading to include the checkpoint SHA:
```markdown
## Phase N: {Name} [checkpoint: {sha_7}]
```
Commit: `conductor(plan): Mark phase '{Phase Name}' as complete`

### 9. Announce Completion
Tell the user the phase is complete, with a summary of the verification report.

## Context Reset
After phase checkpointing, treat the checkpoint as ground truth.
Prior conversational context about implementation details can be dropped.
The checkpoint commit and git notes preserve the audit trail.
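Step 8's heading update is a one-line text transform. A sketch of annotating the phase heading with the short SHA (hypothetical helper, not a repo script):

```python
import re

def annotate_phase_heading(plan_text, phase_name, sha):
    """Append '[checkpoint: {sha_7}]' to the '## Phase N: ...' heading naming phase_name."""
    sha7 = sha[:7]  # first 7 characters of the checkpoint commit SHA
    pattern = re.compile(r"^(## .*" + re.escape(phase_name) + r".*)$", re.MULTILINE)
    # count=1: only the first matching phase heading is annotated
    return pattern.sub(r"\1 [checkpoint: " + sha7 + "]", plan_text, count=1)
```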
.claude/commands/mma-tier1-orchestrator.md (new file, 25 lines)
@@ -0,0 +1,25 @@
---
description: Tier 1 Orchestrator — product alignment, high-level planning, track initialization
---

STRICT SYSTEM DIRECTIVE: You are a Tier 1 Orchestrator, focused on product alignment, high-level planning, and track initialization. ONLY output the requested text. No pleasantries.

# MMA Tier 1: Orchestrator

## Primary Context Documents
Read at session start: `conductor/product.md`, `conductor/product-guidelines.md`

## Responsibilities
- Maintain alignment with the product guidelines and definition
- Define track boundaries and initialize new tracks (`/conductor-new-track`)
- Set up the project environment (`/conductor-setup`)
- Delegate track execution to the Tier 2 Tech Lead

## Limitations
- Read-only tools only: Read, Glob, Grep, WebFetch, WebSearch, Bash (read-only ops)
- Do NOT execute tracks or implement features
- Do NOT write code or edit files
- Do NOT perform low-level bug fixing
- Keep context strictly focused on product definitions and high-level strategy
- To delegate track execution, instruct the human operator to run:
  `uv run python scripts\claude_mma_exec.py --role tier2-tech-lead "[PROMPT]"`
.claude/commands/mma-tier2-tech-lead.md (new file, 72 lines)
@@ -0,0 +1,72 @@
---
description: Tier 2 Tech Lead — track execution, architectural oversight, delegation to Tier 3/4
---

STRICT SYSTEM DIRECTIVE: You are a Tier 2 Tech Lead, focused on architectural design and track execution. ONLY output the requested text. No pleasantries.

# MMA Tier 2: Tech Lead

## Primary Context Documents
Read at session start: `conductor/tech-stack.md`, `conductor/workflow.md`

## Responsibilities
- Manage the execution of implementation tracks (`/conductor-implement`)
- Ensure alignment with `tech-stack.md` and the project architecture
- Break down tasks into specific technical steps for Tier 3 Workers
- Maintain PERSISTENT context throughout a track's implementation phase (NO Context Amnesia)
- Review implementations and coordinate bug fixes via Tier 4 QA

## Delegation Commands (PowerShell)

```powershell
# Spawn a Tier 3 Worker for implementation tasks
uv run python scripts\claude_mma_exec.py --role tier3-worker "[PROMPT]"

# Spawn a Tier 4 QA Agent for error analysis
uv run python scripts\claude_mma_exec.py --role tier4-qa "[PROMPT]"
```

### @file Syntax for Tier 3 Context Injection
`@filepath` anywhere in the prompt string is detected by `claude_mma_exec.py`, and the file is automatically inlined into the Tier 3 context. Use this so Tier 3 has what it needs WITHOUT Tier 2 reading those files first.

```powershell
# Example: Tier 3 gets api_hook_client.py and the styleguide injected automatically
uv run python scripts\claude_mma_exec.py --role tier3-worker "Apply type hints to @api_hook_client.py following @conductor/code_styleguides/python.md. ..."
```
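How `claude_mma_exec.py` actually detects `@filepath` tokens is internal to that script. The sketch below is an assumption about how such expansion could work; the regex and the delimiter format are invented for illustration:

```python
import re
from pathlib import Path

# Assumed token shape: '@' followed by path characters (invented for this sketch)
AT_FILE_RE = re.compile(r"@([\w./\\-]+)")

def inline_at_files(prompt, root):
    """Replace each @filepath token with the file's contents, delimited for the worker."""
    def expand(m):
        path = Path(root) / m.group(1)
        if not path.is_file():
            return m.group(0)  # leave unknown tokens untouched
        body = path.read_text(encoding="utf-8")
        # Delimiter format is hypothetical; the real script may differ
        return "\n--- {} ---\n{}\n---\n".format(m.group(1), body)
    return AT_FILE_RE.sub(expand, prompt)
```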

## Tool Use Hierarchy (MANDATORY — enforced order)

Claude has access to all tools and will default to familiar ones. This hierarchy OVERRIDES that default.

**For any Python file investigation, use in this order:**
1. `py_get_code_outline` — structure map (functions, classes, line ranges). Use this FIRST.
2. `py_get_skeleton` — signatures + docstrings, no bodies
3. `get_file_summary` — high-level prose summary
4. `py_get_definition` / `py_get_signature` — targeted symbol lookup
5. `Grep` / `Glob` — cross-file symbol search and pattern matching
6. `Read` (targeted, with offset/limit) — ONLY after the outline identifies specific line ranges

**`run_powershell` (MCP tool)** — the PRIMARY shell on Windows. Use it for git, tests, scan scripts, and any other shell command. This is native PowerShell, not bash/mingw.

**Bash** — LAST RESORT, only when the MCP server is not running. Bash runs in a mingw sandbox on Windows and may produce no output. Prefer `run_powershell` for everything.

## Hard Rules (Non-Negotiable)

- **NEVER** call `Read` on a file >50 lines without calling `py_get_code_outline` or `py_get_skeleton` first.
- **NEVER** write implementation code, refactored code, type hints, or test code inline in this context. If it goes into the codebase, Tier 3 writes it.
- **NEVER** write or run inline Python scripts via Bash. If a script is needed, it already exists or Tier 3 creates it.
- **NEVER** process large raw Bash output inline — write it to a file and `Read` it, or delegate to Tier 4 QA.
- **ALWAYS** use `@file` injection in Tier 3 prompts rather than reading and summarizing files yourself.

## Refactor-Heavy Tracks (Type Hints, Style Sweeps)

For tracks with no new logic — only mechanical code changes (type hints, style fixes, renames):
- **No TDD cycle required.** Skip the Red/Green phases. The verification is: the scan report shows 0 remaining items.
- Tier 2 role: scope the batch, write a precise Tier 3 prompt, delegate, verify with the scan script.
- Batch by file group. One Tier 3 call per group (e.g., all of scripts/, all of simulation/).
- Verification command: `uv run python scripts\scan_all_hints.py`, then read `scan_report.txt`

## Limitations
- Do NOT perform heavy implementation work directly — delegate to Tier 3
- Do NOT write test or implementation code directly
- For large error logs, always spawn Tier 4 QA rather than reading raw stderr
22
.claude/commands/mma-tier3-worker.md
Normal file
@@ -0,0 +1,22 @@
---
description: Tier 3 Worker — stateless TDD implementation, surgical code changes
---

STRICT SYSTEM DIRECTIVE: You are a stateless Tier 3 Worker (Contributor). Your goal is to implement specific code changes or tests based on the provided task. You have access to tools for reading and writing files (Read, Write, Edit), codebase investigation (Glob, Grep), version control (Bash git commands), and web tools (WebFetch, WebSearch). You CAN execute PowerShell scripts via Bash for verification and testing. Follow TDD and return success status or code changes. No pleasantries, no conversational filler.

# MMA Tier 3: Worker

## Context Model: Context Amnesia
Treat each invocation as starting from zero. Use ONLY what is provided in this prompt plus files you explicitly read during this session. Do not reference prior conversation history.

## Responsibilities
- Implement code strictly according to the provided prompt and specifications
- Write failing tests FIRST (Red phase), then implement code to pass them (Green phase)
- Ensure all changes are minimal, surgical, and conform to the requested standards
- Utilize tool access (Read, Write, Edit, Glob, Grep, Bash) to implement and verify

## Limitations
- No architectural decisions — if ambiguous, pick the minimal correct approach and note the assumption
- No modifications to unrelated files beyond the immediate task scope
- Stateless — always assume a fresh context per invocation
- Rely on dependency skeletons provided in the prompt for understanding module interfaces
30
.claude/commands/mma-tier4-qa.md
Normal file
@@ -0,0 +1,30 @@
---
description: Tier 4 QA Agent — stateless error analysis, log summarization, no fixes
---

STRICT SYSTEM DIRECTIVE: You are a stateless Tier 4 QA Agent. Your goal is to analyze errors, summarize logs, or verify tests. Read-only access only. Do NOT implement fixes. Do NOT modify any files. ONLY output the requested analysis. No pleasantries.

# MMA Tier 4: QA Agent

## Context Model: Context Amnesia
Stateless — treat each invocation as a fresh context. Use only what is provided in this prompt and files you explicitly read.

## Responsibilities
- Compress large stack traces or log files into concise, actionable summaries
- Identify the root cause of test failures or runtime errors
- Provide a brief, technical description of the required fix (description only — NOT the implementation)
- Utilize diagnostic tools (Read, Glob, Grep, Bash read-only) to verify failures

## Output Format

```
ROOT CAUSE: [one sentence]
AFFECTED FILE: [path:line if identifiable]
RECOMMENDED FIX: [one sentence description for Tier 2 to action]
```

## Limitations
- Do NOT implement the fix directly
- Do NOT write or modify any files
- Ensure output is extremely brief and focused
- Always operate statelessly — assume fresh context each invocation
3
.claude/settings.json
Normal file
@@ -0,0 +1,3 @@
{
  "outputStyle": "default"
}
28
.claude/settings.local.json
Normal file
@@ -0,0 +1,28 @@
{
  "permissions": {
    "allow": [
      "mcp__manual-slop__run_powershell",
      "mcp__manual-slop__py_get_definition",
      "mcp__manual-slop__py_get_code_outline",
      "mcp__manual-slop__read_file",
      "mcp__manual-slop__list_directory",
      "mcp__manual-slop__get_file_summary",
      "mcp__manual-slop__py_get_skeleton",
      "mcp__manual-slop__py_get_signature",
      "mcp__manual-slop__py_get_var_declaration",
      "mcp__manual-slop__py_get_imports",
      "mcp__manual-slop__get_file_slice",
      "mcp__manual-slop__set_file_slice",
      "mcp__manual-slop__py_set_signature",
      "mcp__manual-slop__py_set_var_declaration",
      "mcp__manual-slop__py_check_syntax",
      "Bash(timeout 120 uv run:*)",
      "Bash(uv run:*)"
    ]
  },
  "enableAllProjectMcpServers": true,
  "enabledMcpjsonServers": [
    "manual-slop"
  ],
  "outputStyle": "default"
}
21
.dockerignore
Normal file
@@ -0,0 +1,21 @@
.venv
__pycache__
*.pyc
*.pyo
*.pyd
.git
.gitignore
logs
gallery
md_gen
credentials.toml
manual_slop.toml
manual_slop_history.toml
manualslop_layout.ini
dpg_layout.ini
.pytest_cache
scripts/generated
.gemini
conductor/archive
.editorconfig
*.log
@@ -2,7 +2,7 @@ root = true

[*.py]
indent_style = space
-indent_size = 2
+indent_size = 1

[*.s]
indent_style = tab
27
.gemini/agents/tier1-orchestrator.md
Normal file
@@ -0,0 +1,27 @@
---
name: tier1-orchestrator
description: Tier 1 Orchestrator for product alignment and high-level planning.
model: gemini-3.1-pro-preview
tools:
  - read_file
  - list_directory
  - discovered_tool_search_files
  - grep_search
  - discovered_tool_get_file_summary
  - discovered_tool_get_python_skeleton
  - discovered_tool_get_code_outline
  - discovered_tool_get_git_diff
  - discovered_tool_web_search
  - discovered_tool_fetch_url
  - activate_skill
  - discovered_tool_run_powershell
  - discovered_tool_py_find_usages
  - discovered_tool_py_get_imports
  - discovered_tool_py_check_syntax
  - discovered_tool_py_get_hierarchy
  - discovered_tool_py_get_docstring
  - discovered_tool_get_tree
---
STRICT SYSTEM DIRECTIVE: You are a Tier 1 Orchestrator.
Focused on product alignment, high-level planning, and track initialization.
ONLY output the requested text. No pleasantries.
29
.gemini/agents/tier2-tech-lead.md
Normal file
@@ -0,0 +1,29 @@
---
name: tier2-tech-lead
description: Tier 2 Tech Lead for architectural design and execution.
model: gemini-3-flash-preview
tools:
  - read_file
  - write_file
  - replace
  - list_directory
  - discovered_tool_search_files
  - grep_search
  - discovered_tool_get_file_summary
  - discovered_tool_get_python_skeleton
  - discovered_tool_get_code_outline
  - discovered_tool_get_git_diff
  - discovered_tool_web_search
  - discovered_tool_fetch_url
  - activate_skill
  - discovered_tool_run_powershell
  - discovered_tool_py_find_usages
  - discovered_tool_py_get_imports
  - discovered_tool_py_check_syntax
  - discovered_tool_py_get_hierarchy
  - discovered_tool_py_get_docstring
  - discovered_tool_get_tree
---
STRICT SYSTEM DIRECTIVE: You are a Tier 2 Tech Lead.
Focused on architectural design and track execution.
ONLY output the requested text. No pleasantries.
31
.gemini/agents/tier3-worker.md
Normal file
@@ -0,0 +1,31 @@
---
name: tier3-worker
description: Stateless Tier 3 Worker for code implementation and TDD.
model: gemini-3-flash-preview
tools:
  - read_file
  - write_file
  - replace
  - list_directory
  - discovered_tool_search_files
  - grep_search
  - discovered_tool_get_file_summary
  - discovered_tool_get_python_skeleton
  - discovered_tool_get_code_outline
  - discovered_tool_get_git_diff
  - discovered_tool_web_search
  - discovered_tool_fetch_url
  - activate_skill
  - discovered_tool_run_powershell
  - discovered_tool_py_find_usages
  - discovered_tool_py_get_imports
  - discovered_tool_py_check_syntax
  - discovered_tool_py_get_hierarchy
  - discovered_tool_py_get_docstring
  - discovered_tool_get_tree
---
STRICT SYSTEM DIRECTIVE: You are a stateless Tier 3 Worker (Contributor).
Your goal is to implement specific code changes or tests based on the provided task.
You have access to tools for reading and writing files, codebase investigation, and web tools.
You CAN execute PowerShell scripts or run shell commands via discovered_tool_run_powershell for verification and testing.
Follow TDD and return success status or code changes. No pleasantries, no conversational filler.
29
.gemini/agents/tier4-qa.md
Normal file
@@ -0,0 +1,29 @@
---
name: tier4-qa
description: Stateless Tier 4 QA Agent for log analysis and diagnostics.
model: gemini-2.5-flash-lite
tools:
  - read_file
  - list_directory
  - discovered_tool_search_files
  - grep_search
  - discovered_tool_get_file_summary
  - discovered_tool_get_python_skeleton
  - discovered_tool_get_code_outline
  - discovered_tool_get_git_diff
  - discovered_tool_web_search
  - discovered_tool_fetch_url
  - activate_skill
  - discovered_tool_run_powershell
  - discovered_tool_py_find_usages
  - discovered_tool_py_get_imports
  - discovered_tool_py_check_syntax
  - discovered_tool_py_get_hierarchy
  - discovered_tool_py_get_docstring
  - discovered_tool_get_tree
---
STRICT SYSTEM DIRECTIVE: You are a stateless Tier 4 QA Agent.
Your goal is to analyze errors, summarize logs, or verify tests.
You have access to tools for reading files, exploring the codebase, and web tools.
You CAN execute PowerShell scripts or run shell commands via discovered_tool_run_powershell for diagnostics.
ONLY output the requested analysis. No pleasantries.
269
.gemini/policies/99-agent-full-autonomy.toml
Normal file
@@ -0,0 +1,269 @@
[[rule]]
toolName = "discovered_tool_fetch_url"
decision = "allow"
priority = 100
description = "Allow discovered fetch_url tool."

[[rule]]
toolName = "discovered_tool_get_file_slice"
decision = "allow"
priority = 100
description = "Allow discovered get_file_slice tool."

[[rule]]
toolName = "discovered_tool_get_file_summary"
decision = "allow"
priority = 100
description = "Allow discovered get_file_summary tool."

[[rule]]
toolName = "discovered_tool_get_git_diff"
decision = "allow"
priority = 100
description = "Allow discovered get_git_diff tool."

[[rule]]
toolName = "discovered_tool_get_tree"
decision = "allow"
priority = 100
description = "Allow discovered get_tree tool."

[[rule]]
toolName = "discovered_tool_get_ui_performance"
decision = "allow"
priority = 100
description = "Allow discovered get_ui_performance tool."

[[rule]]
toolName = "discovered_tool_list_directory"
decision = "allow"
priority = 100
description = "Allow discovered list_directory tool."

[[rule]]
toolName = "discovered_tool_py_check_syntax"
decision = "allow"
priority = 100
description = "Allow discovered py_check_syntax tool."

[[rule]]
toolName = "discovered_tool_py_find_usages"
decision = "allow"
priority = 100
description = "Allow discovered py_find_usages tool."

[[rule]]
toolName = "discovered_tool_py_get_class_summary"
decision = "allow"
priority = 100
description = "Allow discovered py_get_class_summary tool."

[[rule]]
toolName = "discovered_tool_py_get_code_outline"
decision = "allow"
priority = 100
description = "Allow discovered py_get_code_outline tool."

[[rule]]
toolName = "discovered_tool_py_get_definition"
decision = "allow"
priority = 100
description = "Allow discovered py_get_definition tool."

[[rule]]
toolName = "discovered_tool_py_get_docstring"
decision = "allow"
priority = 100
description = "Allow discovered py_get_docstring tool."

[[rule]]
toolName = "discovered_tool_py_get_hierarchy"
decision = "allow"
priority = 100
description = "Allow discovered py_get_hierarchy tool."

[[rule]]
toolName = "discovered_tool_py_get_imports"
decision = "allow"
priority = 100
description = "Allow discovered py_get_imports tool."

[[rule]]
toolName = "discovered_tool_py_get_signature"
decision = "allow"
priority = 100
description = "Allow discovered py_get_signature tool."

[[rule]]
toolName = "discovered_tool_py_get_skeleton"
decision = "allow"
priority = 100
description = "Allow discovered py_get_skeleton tool."

[[rule]]
toolName = "discovered_tool_py_get_var_declaration"
decision = "allow"
priority = 100
description = "Allow discovered py_get_var_declaration tool."

[[rule]]
toolName = "discovered_tool_py_set_signature"
decision = "allow"
priority = 100
description = "Allow discovered py_set_signature tool."

[[rule]]
toolName = "discovered_tool_py_set_var_declaration"
decision = "allow"
priority = 100
description = "Allow discovered py_set_var_declaration tool."

[[rule]]
toolName = "discovered_tool_py_update_definition"
decision = "allow"
priority = 100
description = "Allow discovered py_update_definition tool."

[[rule]]
toolName = "discovered_tool_read_file"
decision = "allow"
priority = 100
description = "Allow discovered read_file tool."

[[rule]]
toolName = "discovered_tool_run_powershell"
decision = "allow"
priority = 100
description = "Allow discovered run_powershell tool."

[[rule]]
toolName = "discovered_tool_search_files"
decision = "allow"
priority = 100
description = "Allow discovered search_files tool."

[[rule]]
toolName = "discovered_tool_set_file_slice"
decision = "allow"
priority = 100
description = "Allow discovered set_file_slice tool."

[[rule]]
toolName = "discovered_tool_web_search"
decision = "allow"
priority = 100
description = "Allow discovered web_search tool."

[[rule]]
toolName = "run_powershell"
decision = "allow"
priority = 100
description = "Allow the base run_powershell tool with maximum priority."

[[rule]]
toolName = "activate_skill"
decision = "allow"
priority = 990
description = "Allow activate_skill."

[[rule]]
toolName = "ask_user"
decision = "ask_user"
priority = 990
description = "Allow ask_user."

[[rule]]
toolName = "cli_help"
decision = "allow"
priority = 990
description = "Allow cli_help."

[[rule]]
toolName = "codebase_investigator"
decision = "allow"
priority = 990
description = "Allow codebase_investigator."

[[rule]]
toolName = "replace"
decision = "allow"
priority = 990
description = "Allow replace."

[[rule]]
toolName = "glob"
decision = "allow"
priority = 990
description = "Allow glob."

[[rule]]
toolName = "google_web_search"
decision = "allow"
priority = 990
description = "Allow google_web_search."

[[rule]]
toolName = "read_file"
decision = "allow"
priority = 990
description = "Allow read_file."

[[rule]]
toolName = "list_directory"
decision = "allow"
priority = 990
description = "Allow list_directory."

[[rule]]
toolName = "save_memory"
decision = "allow"
priority = 990
description = "Allow save_memory."

[[rule]]
toolName = "grep_search"
decision = "allow"
priority = 990
description = "Allow grep_search."

[[rule]]
toolName = "run_shell_command"
decision = "allow"
priority = 990
description = "Allow run_shell_command."

[[rule]]
toolName = "tier1-orchestrator"
decision = "allow"
priority = 990
description = "Allow tier1-orchestrator."

[[rule]]
toolName = "tier2-tech-lead"
decision = "allow"
priority = 990
description = "Allow tier2-tech-lead."

[[rule]]
toolName = "tier3-worker"
decision = "allow"
priority = 990
description = "Allow tier3-worker."

[[rule]]
toolName = "tier4-qa"
decision = "allow"
priority = 990
description = "Allow tier4-qa."

[[rule]]
toolName = "web_fetch"
decision = "allow"
priority = 990
description = "Allow web_fetch."

[[rule]]
toolName = "write_file"
decision = "allow"
priority = 990
description = "Allow write_file."
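The policy file above repeats an identical `[[rule]]` stanza for every tool name, varying only `toolName`, `priority`, and the description. A file that regular could be generated rather than hand-edited; the helper below is a hypothetical sketch (no such script exists in this diff) showing one way to emit those stanzas from a plain list of tool names.

```python
# Hypothetical generator for the repetitive "[[rule]]" stanzas in
# 99-agent-full-autonomy.toml. Not part of the repository shown above;
# the function name and its defaults are assumptions for illustration.
def render_rules(tool_names, decision="allow", priority=100):
    stanzas = []
    for name in tool_names:
        # Discovered tools are described as "Allow discovered <short> tool."
        short = name.removeprefix("discovered_tool_")
        stanzas.append(
            "[[rule]]\n"
            f'toolName = "{name}"\n'
            f'decision = "{decision}"\n'
            f"priority = {priority}\n"
            f'description = "Allow discovered {short} tool."\n'
        )
    # Blank line between stanzas, matching the file's layout.
    return "\n".join(stanzas)

if __name__ == "__main__":
    print(render_rules(["discovered_tool_fetch_url", "discovered_tool_get_tree"]))
```

Regenerating the file this way would also make it harder for a stanza to drift (e.g., the `run_powershell` rule whose description claims "maximum priority" while its `priority` is 100).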
29
.gemini/settings.json
Normal file
@@ -0,0 +1,29 @@
{
  "experimental": {
    "enableAgents": true
  },
  "tools": {
    "whitelist": [
      "*"
    ],
    "discoveryCommand": "powershell.exe -NoProfile -Command \"Get-Content .gemini/tools.json -Raw\"",
    "callCommand": "scripts\\tool_call.exe"
  },
  "hooks": {
    "BeforeTool": [
      {
        "matcher": "*",
        "hooks": [
          {
            "name": "manual-slop-bridge",
            "type": "command",
            "command": "python C:/projects/manual_slop/scripts/cli_tool_bridge.py"
          }
        ]
      }
    ]
  },
  "hooksConfig": {
    "enabled": true
  }
}
||||
1
.gemini/skills/mma-orchestrator
Symbolic link
1
.gemini/skills/mma-orchestrator
Symbolic link
@@ -0,0 +1 @@
|
||||
C:/projects/manual_slop/mma-orchestrator
|
||||
19
.gemini/skills/mma-tier1-orchestrator/SKILL.md
Normal file
@@ -0,0 +1,19 @@
---
name: mma-tier1-orchestrator
description: Focused on product alignment, high-level planning, and track initialization.
---

# MMA Tier 1: Orchestrator

You are the Tier 1 Orchestrator. Your role is to oversee the product direction and manage project/track initialization within the Conductor framework.

## Responsibilities
- Maintain alignment with the product guidelines and definition.
- Define track boundaries and initialize new tracks (`/conductor:newTrack`).
- Set up the project environment (`/conductor:setup`).
- Delegate track execution to the Tier 2 Tech Lead.

## Limitations
- Do not execute tracks or implement features.
- Do not write code or perform low-level bug fixing.
- Keep context strictly focused on product definitions and high-level strategy.
21
.gemini/skills/mma-tier2-tech-lead/SKILL.md
Normal file
@@ -0,0 +1,21 @@
---
name: mma-tier2-tech-lead
description: Focused on track execution, architectural design, and implementation oversight.
---

# MMA Tier 2: Tech Lead

You are the Tier 2 Tech Lead. Your role is to manage the implementation of tracks (`/conductor:implement`), ensure architectural integrity, and oversee the work of Tier 3 and 4 sub-agents.

## Responsibilities
- Manage the execution of implementation tracks.
- Ensure alignment with `tech-stack.md` and project architecture.
- Break down tasks into specific technical steps for Tier 3 Workers.
- Maintain persistent context throughout a track's implementation phase (No Context Amnesia).
- Review implementations and coordinate bug fixes via Tier 4 QA.

## Limitations
- Do not perform heavy implementation work directly; delegate to Tier 3.
- Delegate implementation tasks to Tier 3 Workers using `uv run python scripts/mma_exec.py --role tier3-worker "[PROMPT]"`.
- For error analysis of large logs, use `uv run python scripts/mma_exec.py --role tier4-qa "[PROMPT]"`.
- Minimize full file reads for large modules; rely on "Skeleton Views" and git diffs.
20
.gemini/skills/mma-tier3-worker/SKILL.md
Normal file
@@ -0,0 +1,20 @@
---
name: mma-tier3-worker
description: Focused on TDD implementation, surgical code changes, and following specific specs.
---

# MMA Tier 3: Worker

You are the Tier 3 Worker. Your role is to implement specific, scoped technical requirements, follow Test-Driven Development (TDD), and make surgical code modifications. You operate in a stateless manner (Context Amnesia).

## Responsibilities
- Implement code strictly according to the provided prompt and specifications.
- Write failing tests first, then implement the code to pass them.
- Ensure all changes are minimal, functional, and conform to the requested standards.
- Utilize provided tool access (read_file, write_file, etc.) to perform implementation and verification.

## Limitations
- Do not make architectural decisions.
- Do not modify unrelated files beyond the immediate task scope.
- Always operate statelessly; assume each task starts with a clean context.
- Rely on "Skeleton Views" provided by Tier 2/Orchestrator for understanding dependencies.
19
.gemini/skills/mma-tier4-qa/SKILL.md
Normal file
@@ -0,0 +1,19 @@
---
name: mma-tier4-qa
description: Focused on test analysis, error summarization, and bug reproduction.
---

# MMA Tier 4: QA Agent

You are the Tier 4 QA Agent. Your role is to analyze error logs, summarize tracebacks, and help diagnose issues efficiently. You operate in a stateless manner (Context Amnesia).

## Responsibilities
- Compress large stack traces or log files into concise, actionable summaries.
- Identify the root cause of test failures or runtime errors.
- Provide a brief, technical description of the required fix.
- Utilize provided diagnostic and exploration tools to verify failures.

## Limitations
- Do not implement the fix directly.
- Ensure your output is extremely brief and focused.
- Always operate statelessly; assume each analysis starts with a clean context.
BIN
.gemini/tools.json
Normal file
Binary file not shown.
17
.gemini/tools/fetch_url.json
Normal file
@@ -0,0 +1,17 @@
{
  "name": "fetch_url",
  "description": "Fetch the full text content of a URL (stripped of HTML tags).",
  "parameters": {
    "type": "object",
    "properties": {
      "url": {
        "type": "string",
        "description": "The full URL to fetch."
      }
    },
    "required": [
      "url"
    ]
  },
  "command": "python scripts/tool_call.py fetch_url"
}
17
.gemini/tools/get_file_summary.json
Normal file
@@ -0,0 +1,17 @@
{
  "name": "get_file_summary",
  "description": "Get a compact heuristic summary of a file without reading its full content. For Python: imports, classes, methods, functions, constants. For TOML: table keys. For Markdown: headings. Others: line count + preview. Use this before read_file to decide if you need the full content.",
  "parameters": {
    "type": "object",
    "properties": {
      "path": {
        "type": "string",
        "description": "Absolute or relative path to the file to summarise."
      }
    },
    "required": [
      "path"
    ]
  },
  "command": "python scripts/tool_call.py get_file_summary"
}
25
.gemini/tools/get_git_diff.json
Normal file
@@ -0,0 +1,25 @@
{
  "name": "get_git_diff",
  "description": "Returns the git diff for a file or directory. Use this to review changes efficiently without reading entire files.",
  "parameters": {
    "type": "object",
    "properties": {
      "path": {
        "type": "string",
        "description": "Path to the file or directory."
      },
      "base_rev": {
        "type": "string",
        "description": "Base revision (e.g. 'HEAD', 'HEAD~1', or a commit hash). Defaults to 'HEAD'."
      },
      "head_rev": {
        "type": "string",
        "description": "Head revision (optional)."
      }
    },
    "required": [
      "path"
    ]
  },
  "command": "python scripts/tool_call.py get_git_diff"
}
17
.gemini/tools/py_get_code_outline.json
Normal file
@@ -0,0 +1,17 @@
{
  "name": "py_get_code_outline",
  "description": "Get a hierarchical outline of a code file. This returns classes, functions, and methods with their line ranges and brief docstrings. Use this to quickly map out a file's structure before reading specific sections.",
  "parameters": {
    "type": "object",
    "properties": {
      "path": {
        "type": "string",
        "description": "Path to the code file (currently supports .py)."
      }
    },
    "required": [
      "path"
    ]
  },
  "command": "python scripts/tool_call.py py_get_code_outline"
}
17
.gemini/tools/py_get_skeleton.json
Normal file
@@ -0,0 +1,17 @@
{
  "name": "py_get_skeleton",
  "description": "Get a skeleton view of a Python file. This returns all classes and function signatures with their docstrings, but replaces function bodies with '...'. Use this to understand module interfaces without reading the full implementation.",
  "parameters": {
    "type": "object",
    "properties": {
      "path": {
        "type": "string",
        "description": "Path to the .py file."
      }
    },
    "required": [
      "path"
    ]
  },
  "command": "python scripts/tool_call.py py_get_skeleton"
}
17
.gemini/tools/run_powershell.json
Normal file
@@ -0,0 +1,17 @@
{
  "name": "run_powershell",
  "description": "Run a PowerShell script within the project base_dir. Use this to create, edit, rename, or delete files and directories. stdout and stderr are returned to you as the result.",
  "parameters": {
    "type": "object",
    "properties": {
      "script": {
        "type": "string",
        "description": "The PowerShell script to execute."
      }
    },
    "required": [
      "script"
    ]
  },
  "command": "python scripts/tool_call.py run_powershell"
}
22
.gemini/tools/search_files.json
Normal file
@@ -0,0 +1,22 @@
{
  "name": "search_files",
  "description": "Search for files matching a glob pattern within an allowed directory. Supports recursive patterns like '**/*.py'. Use this to find files by extension or name pattern.",
  "parameters": {
    "type": "object",
    "properties": {
      "path": {
        "type": "string",
        "description": "Absolute path to the directory to search within."
      },
      "pattern": {
        "type": "string",
        "description": "Glob pattern, e.g. '*.py', '**/*.toml', 'src/**/*.rs'."
      }
    },
    "required": [
      "path",
      "pattern"
    ]
  },
  "command": "python scripts/tool_call.py search_files"
}
17
.gemini/tools/web_search.json
Normal file
@@ -0,0 +1,17 @@
{
  "name": "web_search",
  "description": "Search the web using DuckDuckGo. Returns the top 5 search results with titles, URLs, and snippets.",
  "parameters": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "The search query."
      }
    },
    "required": [
      "query"
    ]
  },
  "command": "python scripts/tool_call.py web_search"
}
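Every tool definition above routes its call through `python scripts/tool_call.py <tool_name>`. That dispatcher is not included in this diff, so the following is a hypothetical sketch of its shape, assuming the CLI passes the tool name as the first argument, the JSON arguments on stdin, and reads the JSON result from stdout (all of which are assumptions, including the handler names).

```python
# Hypothetical sketch of scripts/tool_call.py (the real file is not shown in
# this diff). Assumes: tool name arrives as argv[1], the JSON argument object
# arrives on stdin, and the JSON result is written to stdout.
import json
import sys


def dispatch(tool_name, args):
    # Map tool names to plain Python handlers; the real handlers would do the
    # actual work (HTTP fetch, file search, PowerShell execution, ...).
    handlers = {
        "web_search": lambda a: f"searching for {a['query']!r}",
        "fetch_url": lambda a: f"fetching {a['url']}",
    }
    handler = handlers.get(tool_name)
    if handler is None:
        return {"error": f"unknown tool: {tool_name}"}
    return {"result": handler(args)}


if __name__ == "__main__":
    name = sys.argv[1]
    payload = json.load(sys.stdin)
    print(json.dumps(dispatch(name, payload)))
```

Keeping one dispatcher behind every tool JSON means the per-tool files only declare schema, and adding a tool is one JSON file plus one handler entry.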
BIN
.gitignore
vendored
Binary file not shown.
14
.mcp.json
Normal file
@@ -0,0 +1,14 @@
{
  "mcpServers": {
    "manual-slop": {
      "type": "stdio",
      "command": "C:\\Users\\Ed\\scoop\\apps\\uv\\current\\uv.exe",
      "args": [
        "run",
        "python",
        "C:\\projects\\manual_slop\\scripts\\mcp_server.py"
      ],
      "env": {}
    }
  }
}
58
ARCHITECTURE.md
Normal file
@@ -0,0 +1,58 @@
# ARCHITECTURE.md

## Tech Stack
- **Framework**: [Primary framework/language]
- **Database**: [Database system]
- **Frontend**: [Frontend technology]
- **Backend**: [Backend technology]
- **Infrastructure**: [Hosting/deployment]
- **Build Tools**: [Build system]

## Directory Structure
```
project/
├── src/        # Source code
├── tests/      # Test files
├── docs/       # Documentation
├── config/     # Configuration files
└── scripts/    # Build/deployment scripts
```

## Key Architectural Decisions

### [Decision 1]
**Context**: [Why this decision was needed]
**Decision**: [What was decided]
**Rationale**: [Why this approach was chosen]
**Consequences**: [Trade-offs and implications]

## Component Architecture

### [ComponentName] Structure <!-- #component-anchor -->
```typescript
// Major classes with exact line numbers
class MainClass { /* lines 100-500 */ } // <!-- #main-class -->
class Helper { /* lines 501-600 */ }    // <!-- #helper-class -->
```

## System Flow Diagram
```
[User] -> [Frontend] -> [API] -> [Database]
               |             |
               v             v
            [Cache]   [External Service]
```

## Common Patterns

### [Pattern Name]
**When to use**: [Circumstances]
**Implementation**: [How to implement]
**Example**: [Code example with line numbers]

## Keywords <!-- #keywords -->
- architecture
- system design
- tech stack
- components
- patterns
BUILD.md (new file, 103 lines)
@@ -0,0 +1,103 @@
# BUILD.md

## Prerequisites
- [Runtime requirements]
- [Development tools needed]
- [Environment setup]

## Build Commands

### Development
```bash
# Start development server
npm run dev

# Run in watch mode
npm run watch
```

### Production
```bash
# Build for production
npm run build

# Start production server
npm start
```

### Testing
```bash
# Run all tests
npm test

# Run tests in watch mode
npm run test:watch

# Run specific test file
npm test -- filename
```

### Linting & Formatting
```bash
# Lint code
npm run lint

# Fix linting issues
npm run lint:fix

# Format code
npm run format
```

## CI/CD Pipeline

### GitHub Actions
```yaml
# .github/workflows/main.yml
name: CI/CD
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
      - run: npm ci
      - run: npm test
      - run: npm run build
```

## Deployment

### Staging
1. [Deployment steps]
2. [Verification steps]

### Production
1. [Pre-deployment checklist]
2. [Deployment steps]
3. [Post-deployment verification]

## Rollback Procedures
1. [Emergency rollback steps]
2. [Database rollback if needed]
3. [Verification steps]

## Troubleshooting

### Common Issues
**Issue**: [Problem description]
**Solution**: [How to fix]

### Build Failures
- [Common build errors and solutions]

## Keywords <!-- #keywords -->
- build
- deployment
- ci/cd
- testing
- production
CLAUDE.md (new file, 118 lines)
@@ -0,0 +1,118 @@
# CLAUDE.md
<!-- Generated by Claude Conductor v2.0.0 -->

This file provides guidance to Claude Code when working with this repository.

## Critical Context (Read First)
- **Tech Stack**: Python 3.11+, Dear PyGui / ImGui, FastAPI, Uvicorn
- **Main File**: `gui_2.py` (primary GUI), `ai_client.py` (multi-provider LLM abstraction)
- **Core Mechanic**: GUI orchestrator for LLM-driven coding with 4-tier MMA architecture
- **Key Integration**: Gemini API, Anthropic API, DeepSeek, Gemini CLI (headless), MCP tools
- **Platform Support**: Windows (PowerShell), single developer, local use
- **DO NOT**: Read full files >50 lines without using `py_get_skeleton` or `get_file_summary` first. Do NOT perform heavy implementation directly; delegate to Tier 3 Workers.

## Environment
- Shell: PowerShell (pwsh) on Windows
- Do NOT use bash-specific syntax (use PowerShell equivalents)
- Use `uv run` for all Python execution
- Path separators: forward slashes work in PowerShell
- **Shell execution in Claude Code**: The `Bash` tool runs in a mingw sandbox on Windows and produces unreliable/empty output. Use the `run_powershell` MCP tool for ALL shell commands (git, tests, scans). Bash is a last resort, used only when the MCP server is not running.

## Session Startup Checklist
**IMPORTANT**: At the start of each session:
1. **Check TASKS.md**: look for IN_PROGRESS or BLOCKED tracks
2. **Review recent JOURNAL.md entries**: scan the last 2-3 entries for context
3. **If resuming work**: run `/conductor-setup` to load full context
4. **If starting fresh**: run `/conductor-status` for an overview

## Quick Reference
**GUI Entry**: `gui_2.py` - Primary ImGui interface
**AI Client**: `ai_client.py` - Multi-provider abstraction (Gemini, Anthropic, DeepSeek)
**MCP Client**: `mcp_client.py:773-831` - Tool dispatch (26 tools)
**Project Manager**: `project_manager.py` - Context & file management
**MMA Engine**: `multi_agent_conductor.py:15-100` - ConductorEngine orchestration
**Tech Lead**: `conductor_tech_lead.py` - Tier 2 ticket generation
**DAG Engine**: `dag_engine.py` - Task dependency resolution
**Session Logger**: `session_logger.py` - Audit trails (JSON-L + markdown)
**Shell Runner**: `shell_runner.py` - PowerShell execution (60s timeout)
**Models**: `models.py:6-84` - Ticket and Track data structures
**File Cache**: `file_cache.py` - ASTParser with tree-sitter skeletons
**Summarizer**: `summarize.py` - Heuristic file summaries
**Outliner**: `outline_tool.py` - Code outline with line ranges

## Conductor System
The project uses a spec-driven track system in `conductor/`:
- **Tracks**: `conductor/tracks/{name}_{YYYYMMDD}/` - spec.md, plan.md, metadata.json
- **Workflow**: `conductor/workflow.md` - full task lifecycle and TDD protocol
- **Tech Stack**: `conductor/tech-stack.md` - technology constraints
- **Product**: `conductor/product.md` - product vision and guidelines

### Conductor Commands (Claude Code slash commands)
- `/conductor-setup` - bootstrap session with conductor context
- `/conductor-status` - show all track status
- `/conductor-new-track` - create a new track (Tier 1)
- `/conductor-implement` - execute a track (Tier 2, delegates to Tier 3/4)
- `/conductor-verify` - phase completion verification and checkpointing

### MMA Tier Commands
- `/mma-tier1-orchestrator` - product alignment, planning
- `/mma-tier2-tech-lead` - track execution, architectural oversight
- `/mma-tier3-worker` - stateless TDD implementation
- `/mma-tier4-qa` - stateless error analysis

### Delegation (Tier 2 spawns Tier 3/4)
```powershell
uv run python scripts\claude_mma_exec.py --role tier3-worker "Task prompt here"
uv run python scripts\claude_mma_exec.py --role tier4-qa "Error analysis prompt"
```

## Current State
- [x] Multi-provider AI client (Gemini, Anthropic, DeepSeek)
- [x] Dear PyGui / ImGui GUI with multi-panel interface
- [x] MMA 4-tier orchestration engine
- [x] Custom MCP tools (26 tools via mcp_client.py)
- [x] Session logging and audit trails
- [x] Gemini CLI headless adapter
- [x] Claude Code conductor integration
- [~] AI-Optimized Python Style Refactor (Phase 3: type hints for UI modules)
- [~] Robust Live Simulation Verification (Phase 2: Epic/Track verification)
- [ ] Documentation Refresh and Context Cleanup

## Development Workflow
1. Run `/conductor-setup` to load session context
2. Pick the active track from `TASKS.md` or `/conductor-status`
3. Run `/conductor-implement` to resume track execution
4. Follow TDD: Red (failing tests) → Green (pass) → Refactor
5. Delegate implementation to Tier 3 Workers, errors to Tier 4 QA
6. On phase completion: run `/conductor-verify` for a checkpoint

## Anti-Patterns (Avoid These)
- **Don't read full large files**: use `py_get_skeleton`, `get_file_summary`, `py_get_code_outline` first (Research-First Protocol)
- **Don't implement directly as Tier 2**: delegate to Tier 3 Workers via `claude_mma_exec.py`
- **Don't skip TDD**: write failing tests before implementation
- **Don't modify the tech stack silently**: update `conductor/tech-stack.md` BEFORE implementing
- **Don't skip phase verification**: run `/conductor-verify` when all tasks in a phase are `[x]`
- **Don't mix track work**: stay focused on one track at a time

## MCP Tools (available via manual-slop MCP server)
When the MCP server is running, these tools are available natively:
`py_get_skeleton`, `py_get_code_outline`, `py_get_definition`, `py_update_definition`,
`py_get_signature`, `py_set_signature`, `py_get_class_summary`, `py_find_usages`,
`py_get_imports`, `py_check_syntax`, `py_get_hierarchy`, `py_get_docstring`,
`get_file_summary`, `get_file_slice`, `set_file_slice`, `get_git_diff`, `get_tree`,
`search_files`, `read_file`, `list_directory`, `web_search`, `fetch_url`,
`run_powershell`, `get_ui_performance`, `py_get_var_declaration`, `py_set_var_declaration`

## Journal Update Requirements
Update JOURNAL.md after:
- Completing any significant feature or fix
- Encountering and resolving errors
- The end of each work session
- Making architectural decisions
Format: What/Why/How/Issues/Result structure

## Task Management Integration
- **TASKS.md**: Quick-read pointer to active conductor tracks
- **conductor/tracks/*/plan.md**: Detailed task state (source of truth)
- **JOURNAL.md**: Completed work history with `|TASK:ID|` tags
- **ERRORS.md**: P0/P1 error tracking
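The delegation commands in CLAUDE.md can also be wrapped programmatically. A sketch of how a Tier 2 process might build and launch a stateless worker (the argv mirrors the PowerShell examples above; `scripts/claude_mma_exec.py` itself is not shown in this diff, and the 600s timeout is an assumption):

```python
import subprocess


def build_delegation(role: str, prompt: str) -> list[str]:
    """Build the argv for spawning a stateless Tier 3/4 agent via uv."""
    if role not in ("tier3-worker", "tier4-qa"):
        raise ValueError(f"unknown MMA role: {role}")
    return ["uv", "run", "python", "scripts/claude_mma_exec.py",
            "--role", role, prompt]


def delegate(role: str, prompt: str) -> str:
    """Spawn the agent process and return its captured stdout."""
    proc = subprocess.run(build_delegation(role, prompt),
                          capture_output=True, text=True, timeout=600)
    return proc.stdout
```

Keeping the argv construction separate from the subprocess call makes the command easy to log to the session audit trail before execution.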
CONDUCTOR.md (new file, 511 lines)
@@ -0,0 +1,511 @@
# CONDUCTOR.md
<!-- Generated by Claude Conductor v2.0.0 -->

> _Read me first. Every other doc is linked below._

## Critical Context (Read First)
- **Tech Stack**: [List core technologies]
- **Main File**: [Primary code file and line count]
- **Core Mechanic**: [One-line description]
- **Key Integration**: [Important external services]
- **Platform Support**: [Deployment targets]
- **DO NOT**: [Critical things to avoid]

## Table of Contents
1. [Architecture](ARCHITECTURE.md) - Tech stack, folder structure, infrastructure
2. [Design Tokens](DESIGN.md) - Colors, typography, visual system
3. [UI/UX Patterns](UIUX.md) - Components, interactions, accessibility
4. [Runtime Config](CONFIG.md) - Environment variables, feature flags
5. [Data Model](DATA_MODEL.md) - Database schema, entities, relationships
6. [API Contracts](API.md) - Endpoints, request/response formats, auth
7. [Build & Release](BUILD.md) - Build process, deployment, CI/CD
8. [Testing Guide](TEST.md) - Test strategies, E2E scenarios, coverage
9. [Operational Playbooks](PLAYBOOKS/DEPLOY.md) - Deployment, rollback, monitoring
10. [Contributing](CONTRIBUTING.md) - Code style, PR process, conventions
11. [Error Ledger](ERRORS.md) - Critical P0/P1 error tracking
12. [Task Management](TASKS.md) - Active tasks, phase tracking, context preservation

## Quick Reference
**Main Constants**: `[file:lines]` - Description
**Core Class**: `[file:lines]` - Description
**Key Function**: `[file:lines]` - Description
[Include 10-15 most accessed code locations]

## Current State
- [x] Feature complete
- [ ] Feature in progress
- [ ] Feature planned
[Track active work]

## Development Workflow
[5-6 steps for common workflow]

## Task Templates
### 1. [Common Task Name]
1. Step with file:line reference
2. Step with specific action
3. Test step
4. Documentation update

[Include 3-5 templates]

## Anti-Patterns (Avoid These)
❌ **Don't [action]** - [Reason]
[List 5-6 critical mistakes]

## Version History
- **v1.0.0** - Initial release
- **v1.1.0** - Feature added (see JOURNAL.md YYYY-MM-DD)
[Link major versions to journal entries]

## Continuous Engineering Journal <!-- do not remove -->

Claude, keep an ever-growing changelog in [`JOURNAL.md`](JOURNAL.md).

### What to Journal
- **Major changes**: New features, significant refactors, API changes
- **Bug fixes**: What broke, why, and how it was fixed
- **Frustration points**: Problems that took multiple attempts to solve
- **Design decisions**: Why we chose one approach over another
- **Performance improvements**: Before/after metrics
- **User feedback**: Notable issues or requests
- **Learning moments**: New techniques or patterns discovered

### Journal Format
\```
## YYYY-MM-DD HH:MM

### [Short Title]
- **What**: Brief description of the change
- **Why**: Reason for the change
- **How**: Technical approach taken
- **Issues**: Any problems encountered
- **Result**: Outcome and any metrics

### [Short Title] |ERROR:ERR-YYYY-MM-DD-001|
- **What**: Critical P0/P1 error description
- **Why**: Root cause analysis
- **How**: Fix implementation
- **Issues**: Debugging challenges
- **Result**: Resolution and prevention measures

### [Task Title] |TASK:TASK-YYYY-MM-DD-001|
- **What**: Task implementation summary
- **Why**: Part of [Phase Name] phase
- **How**: Technical approach and key decisions
- **Issues**: Blockers encountered and resolved
- **Result**: Task completed, findings documented in ARCHITECTURE.md
\```

### Compaction Rule
When `JOURNAL.md` exceeds **500 lines**:
1. Claude summarizes the oldest half into `JOURNAL_ARCHIVE/<year>-<month>.md`
2. Remaining entries stay in `JOURNAL.md` so the file never grows unbounded

> ⚠️ Claude must NEVER delete raw history—only move & summarize.
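The mechanical half of the compaction rule (moving the oldest lines into a dated archive; the summarization itself is Claude's job) can be sketched as follows. Names like `compact_journal` are illustrative, not part of the framework:

```python
from datetime import date
from pathlib import Path


def compact_journal(journal: Path, archive_dir: Path, max_lines: int = 500) -> bool:
    """Move the oldest half of JOURNAL.md into a dated archive file.

    Returns True if compaction ran. Raw history is moved, never deleted.
    """
    lines = journal.read_text(encoding="utf-8").splitlines(keepends=True)
    if len(lines) <= max_lines:
        return False  # under the threshold: nothing to do
    cut = len(lines) // 2
    archive_dir.mkdir(parents=True, exist_ok=True)
    archive = archive_dir / f"{date.today():%Y-%m}.md"
    with archive.open("a", encoding="utf-8") as fh:
        fh.writelines(lines[:cut])  # append so repeated runs never overwrite
    journal.write_text("".join(lines[cut:]), encoding="utf-8")
    return True
```

Appending to the archive (rather than overwriting) means several compactions in one month accumulate into the same `<year>-<month>.md` file.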
### 2. ARCHITECTURE.md
**Purpose**: System design, tech stack decisions, and code structure with line numbers.

**Required Elements**:
- Technology stack listing
- Directory structure diagram
- Key architectural decisions with rationale
- Component architecture with exact line numbers
- System flow diagram (ASCII art)
- Common patterns section
- Keywords for search optimization

**Line Number Format**:
\```
#### ComponentName Structure <!-- #component-anchor -->
\```typescript
// Major classes with exact line numbers
class MainClass { /* lines 100-500 */ } // <!-- #main-class -->
class Helper { /* lines 501-600 */ } // <!-- #helper-class -->
\```
\```

### 3. DESIGN.md
**Purpose**: Visual design system, styling, and theming documentation.

**Required Sections**:
- Typography system
- Color palette (with hex values)
- Visual effects specifications
- Character/entity design
- UI/UX component styling
- Animation system
- Mobile design considerations
- Accessibility guidelines
- Keywords section

### 4. DATA_MODEL.md
**Purpose**: Database schema, application models, and data structures.

**Required Elements**:
- Database schema (SQL)
- Application data models (TypeScript/language interfaces)
- Validation rules
- Common queries
- Data migration history
- Keywords for entities

### 5. API.md
**Purpose**: Complete API documentation with examples.

**Structure for Each Endpoint**:
\```
### Endpoint Name

\```http
METHOD /api/endpoint
\```

#### Request
\```json
{
  "field": "type"
}
\```

#### Response
\```json
{
  "field": "value"
}
\```

#### Details
- **Rate limit**: X requests per Y seconds
- **Auth**: Required/Optional
- **Notes**: Special considerations
\```

### 6. CONFIG.md
**Purpose**: Runtime configuration, environment variables, and settings.

**Required Sections**:
- Environment variables (required and optional)
- Application configuration constants
- Feature flags
- Performance tuning settings
- Security configuration
- Common patterns for configuration changes

### 7. BUILD.md
**Purpose**: Build process, deployment, and CI/CD documentation.

**Include**:
- Prerequisites
- Build commands
- CI/CD pipeline configuration
- Deployment steps
- Rollback procedures
- Troubleshooting guide

### 8. TEST.md
**Purpose**: Testing strategies, patterns, and examples.

**Sections**:
- Test stack and tools
- Commands for running tests
- Test structure
- Coverage goals
- Common test patterns
- Debugging tests

### 9. UIUX.md
**Purpose**: Interaction patterns, user flows, and behavior specifications.

**Cover**:
- Input methods
- State transitions
- Component behaviors
- User flows
- Accessibility patterns
- Performance considerations

### 10. CONTRIBUTING.md
**Purpose**: Guidelines for contributors.

**Include**:
- Code of conduct
- Development setup
- Code style guide
- Commit message format
- PR process
- Common patterns

### 11. PLAYBOOKS/DEPLOY.md
**Purpose**: Step-by-step operational procedures.

**Format**:
- Pre-deployment checklist
- Deployment steps (multiple options)
- Post-deployment verification
- Rollback procedures
- Troubleshooting

### 12. ERRORS.md (Critical Error Ledger)
**Purpose**: Track and resolve P0/P1 critical errors with full traceability.

**Required Structure**:
\```
# Critical Error Ledger <!-- auto-maintained -->

## Schema
| ID | First seen | Status | Severity | Affected area | Link to fix |
|----|------------|--------|----------|---------------|-------------|

## Active Errors
[New errors added here, newest first]

## Resolved Errors
[Moved here when fixed, with links to fixes]
\```

**Error ID Format**: `ERR-YYYY-MM-DD-001` (increment for multiple errors per day)

**Severity Definitions**:
- **P0**: Complete outage, data loss, security breach
- **P1**: Major functionality broken, significant performance degradation
- **P2**: Minor functionality (not tracked in ERRORS.md)
- **P3**: Cosmetic issues (not tracked in ERRORS.md)

**Claude's Error Logging Process**:
1. When a P0/P1 error occurs, immediately add it to Active Errors
2. Create a corresponding JOURNAL.md entry with details
3. When resolved:
   - Move it to the Resolved Errors section
   - Update its status to "resolved"
   - Add the commit hash and PR link
   - Add the `|ERROR:<ID>|` tag to the JOURNAL.md entry
   - Link back to the JOURNAL entry from ERRORS.md
### 13. TASKS.md (Active Task Management)
**Purpose**: Track ongoing work with phase awareness and context preservation between sessions.

**IMPORTANT**: TASKS.md complements Claude's built-in todo system - it does NOT replace it:
- Claude's todos: for immediate task tracking within a session
- TASKS.md: for preserving context and state between sessions

**Required Structure**:
```
# Task Management

## Active Phase
**Phase**: [High-level project phase name]
**Started**: YYYY-MM-DD
**Target**: YYYY-MM-DD
**Progress**: X/Y tasks completed

## Current Task
**Task ID**: TASK-YYYY-MM-DD-NNN
**Title**: [Descriptive task name]
**Status**: PLANNING | IN_PROGRESS | BLOCKED | TESTING | COMPLETE
**Started**: YYYY-MM-DD HH:MM
**Dependencies**: [List task IDs this depends on]

### Task Context
<!-- Critical information needed to resume this task -->
- **Previous Work**: [Link to related tasks/PRs]
- **Key Files**: [Primary files being modified with line ranges]
- **Environment**: [Specific config/versions if relevant]
- **Next Steps**: [Immediate actions when resuming]

### Findings & Decisions
- **FINDING-001**: [Discovery that affects approach]
- **DECISION-001**: [Technical choice made] → Link to ARCHITECTURE.md
- **BLOCKER-001**: [Issue preventing progress] → Link to resolution

### Task Chain
1. ✅ [Completed prerequisite task] (TASK-YYYY-MM-DD-001)
2. 🔄 [Current task] (CURRENT)
3. ⏳ [Next planned task]
4. ⏳ [Future task in phase]
```

**Task Management Rules**:
1. **One Active Task**: Only one task should be IN_PROGRESS at a time
2. **Context Capture**: Before switching tasks, capture all context needed to resume
3. **Findings Documentation**: Record unexpected discoveries that impact the approach
4. **Decision Linking**: Link architectural decisions to ARCHITECTURE.md
5. **Completion Trigger**: When a task completes:
   - Generate a JOURNAL.md entry with the task summary
   - Archive task details to TASKS_ARCHIVE/YYYY-MM/TASK-ID.md
   - Load the next task from the chain or prompt for a new phase

**Task States**:
- **PLANNING**: Defining approach and breaking down work
- **IN_PROGRESS**: Actively working on implementation
- **BLOCKED**: Waiting on an external dependency or decision
- **TESTING**: Implementation complete, validating functionality
- **COMPLETE**: Task finished and documented

**Integration with Journal**:
- Each completed task auto-generates a journal entry
- The journal references the task ID for full context
- Critical findings are promoted to the relevant documentation
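The task states above form a small state machine. A sketch of the lifecycle as code, where the transition table is one reasonable interpretation of the lifecycle (TASKS.md does not spell out legal transitions):

```python
from enum import Enum


class TaskState(str, Enum):
    PLANNING = "PLANNING"
    IN_PROGRESS = "IN_PROGRESS"
    BLOCKED = "BLOCKED"
    TESTING = "TESTING"
    COMPLETE = "COMPLETE"


# Assumed legal transitions: blocked work resumes, failed testing loops back,
# and COMPLETE is terminal.
TRANSITIONS = {
    TaskState.PLANNING: {TaskState.IN_PROGRESS},
    TaskState.IN_PROGRESS: {TaskState.BLOCKED, TaskState.TESTING},
    TaskState.BLOCKED: {TaskState.IN_PROGRESS},
    TaskState.TESTING: {TaskState.IN_PROGRESS, TaskState.COMPLETE},
    TaskState.COMPLETE: set(),
}


def can_transition(src: TaskState, dst: TaskState) -> bool:
    """Check whether a status change is allowed under the table above."""
    return dst in TRANSITIONS[src]
```

Validating the `**Status**:` field against such a table when rewriting TASKS.md would catch typos and illegal jumps (e.g. PLANNING straight to COMPLETE).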
## Documentation Optimization Rules

### 1. Line Number Anchors
- Add exact line numbers for every class, function, and major code section
- Format: `**Class Name (Lines 100-200)**`
- Add HTML anchors: `<!-- #class-name -->`
- Update when code structure changes significantly

### 2. Quick Reference Card
- Place in CLAUDE.md after the Table of Contents
- Include the 10-15 most common code locations
- Format: `**Feature**: `file:lines` - Description`

### 3. Current State Tracking
- Use checkbox format in CLAUDE.md
- `- [x] Completed feature`
- `- [ ] In-progress feature`
- Update after each work session

### 4. Task Templates
- Provide 3-5 step-by-step workflows
- Include specific line numbers
- Reference files that need updating
- Add test/verification steps

### 5. Keywords Sections
- Add to each major .md file
- List alternative search terms
- Format: `## Keywords <!-- #keywords -->`
- Include synonyms and related terms

### 6. Anti-Patterns
- Use the ❌ emoji for clarity
- Explain why each is problematic
- Include 5-6 critical mistakes
- Place prominently in CLAUDE.md

### 7. System Flow Diagrams
- Use ASCII art for simplicity
- Show data/control flow
- Keep visual and readable
- Place in ARCHITECTURE.md

### 8. Common Patterns
- Add to relevant docs (CONFIG.md, ARCHITECTURE.md)
- Show exact code changes needed
- Include before/after examples
- Reference specific functions

### 9. Version History
- Link to JOURNAL.md entries
- Format: `v1.0.0 - Feature (see JOURNAL.md YYYY-MM-DD)`
- Track major changes only

### 10. Cross-Linking
- Link between related sections
- Use relative paths: `[Link](./FILE.md#section)`
- Ensure bidirectional linking where appropriate
## Journal System Setup

### JOURNAL.md Structure
\```
# Engineering Journal

## YYYY-MM-DD HH:MM

### [Descriptive Title]
- **What**: Brief description of the change
- **Why**: Reason for the change
- **How**: Technical approach taken
- **Issues**: Any problems encountered
- **Result**: Outcome and any metrics

---

[Entries continue chronologically]
\```

### Journal Best Practices
1. **Entry Timing**: Add an entry immediately after significant work
2. **Detail Level**: Include enough detail to understand the change months later
3. **Problem Documentation**: Especially document multi-attempt solutions
4. **Learning Moments**: Capture new techniques discovered
5. **Metrics**: Include performance improvements, time saved, etc.

### Archive Process
When JOURNAL.md exceeds 500 lines:
1. Create a `JOURNAL_ARCHIVE/` directory
2. Move the oldest 250 lines to `JOURNAL_ARCHIVE/YYYY-MM.md`
3. Add a summary header to the archive file
4. Keep recent entries in the main JOURNAL.md

## Implementation Steps

### Phase 1: Initial Setup (30-60 minutes)
1. **Create CLAUDE.md** with all required sections
2. **Fill Critical Context** with the 6 essential facts
3. **Create Table of Contents** with placeholder links
4. **Add Quick Reference** with the top 10-15 code locations
5. **Set up Journal section** with formatting rules

### Phase 2: Core Documentation (2-4 hours)
1. **Create each .md file** from the list above
2. **Add a Keywords section** to each file
3. **Cross-link between files** where relevant
4. **Add line numbers** to code references
5. **Create PLAYBOOKS/ directory** with DEPLOY.md
6. **Create ERRORS.md** with the schema table

### Phase 3: Optimization (1-2 hours)
1. **Add Task Templates** to CLAUDE.md
2. **Create ASCII system flow** in ARCHITECTURE.md
3. **Add Common Patterns** sections
4. **Document Anti-Patterns**
5. **Set up Version History**

### Phase 4: First Journal Entry
Create an initial JOURNAL.md entry documenting the setup:
\```
## YYYY-MM-DD HH:MM

### Documentation Framework Implementation
- **What**: Implemented the CLAUDE.md modular documentation system
- **Why**: Improve AI navigation and code maintainability
- **How**: Split monolithic docs into focused modules with cross-linking
- **Issues**: None - clean implementation
- **Result**: [Number] documentation files created with full cross-referencing
\```

## Maintenance Guidelines

### Daily
- Update JOURNAL.md with significant changes
- Mark completed items in Current State
- Update line numbers after major refactoring

### Weekly
- Review and update the Quick Reference section
- Check for broken cross-links
- Update Task Templates if workflows change

### Monthly
- Review Keywords sections for completeness
- Update Version History
- Check if JOURNAL.md needs archiving

### Per Release
- Update Version History in CLAUDE.md
- Create a comprehensive JOURNAL.md entry
- Review all documentation for accuracy
- Update the Current State checklist

## Benefits of This System

1. **AI Efficiency**: Claude can quickly navigate to exact code locations
2. **Modularity**: Easy to update specific documentation without affecting others
3. **Discoverability**: New developers/AI can quickly understand the project
4. **History Tracking**: Complete record of changes and decisions
5. **Task Automation**: Templates reduce repetitive instructions
6. **Error Prevention**: Anti-patterns prevent common mistakes
Dockerfile (new file, 34 lines)
@@ -0,0 +1,34 @@
# Use python:3.11-slim as a base
FROM python:3.11-slim

# Set environment variables
# UV_SYSTEM_PYTHON=1 allows uv to install into the system site-packages
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    UV_SYSTEM_PYTHON=1

# Install system dependencies and uv
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl \
        ca-certificates \
    && rm -rf /var/lib/apt/lists/* \
    && curl -LsSf https://astral.sh/uv/install.sh | sh \
    && mv /root/.local/bin/uv /usr/local/bin/uv

# Set the working directory in the container
WORKDIR /app

# Copy dependency files first to leverage Docker layer caching
COPY pyproject.toml requirements.txt* ./

# Install dependencies via uv
RUN if [ -f requirements.txt ]; then uv pip install --no-cache -r requirements.txt; fi

# Copy the rest of the application code
COPY . .

# Expose port 8000 for the headless API/service
EXPOSE 8000

# Set the entrypoint to run the app in headless mode
ENTRYPOINT ["python", "gui_2.py", "--headless"]
47
GEMINI.md
Normal file
47
GEMINI.md
Normal file
@@ -0,0 +1,47 @@
# Project Overview

**Manual Slop** is a local GUI application designed as an experimental, "manual" AI coding assistant. It allows users to curate and send context (files, screenshots, and discussion history) to AI APIs (Gemini and Anthropic). The AI can then execute PowerShell scripts within the project directory to modify files, requiring explicit user confirmation before execution.

**Main Technologies:**

* **Language:** Python 3.11+
* **Package Management:** `uv`
* **GUI Framework:** Dear PyGui (`dearpygui`), ImGui Bundle (`imgui-bundle`)
* **AI SDKs:** `google-genai` (Gemini), `anthropic`
* **Configuration:** TOML (`tomli-w`)

**Architecture:**

* **`gui_legacy.py`:** The main entry point and Dear PyGui application logic. Handles all panels, layouts, user input, and confirmation dialogs.
* **`ai_client.py`:** A unified wrapper for both Gemini and Anthropic APIs. Manages sessions, tool/function-call loops, token estimation, and context history management.
* **`aggregate.py`:** Responsible for building the `file_items` context. It reads project configurations, collects files and screenshots, and builds the context into markdown format to send to the AI.
* **`mcp_client.py`:** Implements MCP-like tools (e.g., `read_file`, `list_directory`, `search_files`, `web_search`) as native functions that the AI can call. Enforces a strict allowlist for file access.
* **`shell_runner.py`:** A sandboxed subprocess wrapper that executes PowerShell scripts (`powershell -NoProfile -NonInteractive -Command`) provided by the AI.
* **`project_manager.py`:** Manages per-project TOML configurations (`manual_slop.toml`), serializes discussion entries, and integrates with git (e.g., fetching current commit).
* **`session_logger.py`:** Handles timestamped logging of communication history (JSON-L) and tool calls (saving generated `.ps1` files).

# Building and Running

* **Setup:** The application uses `uv` for dependency management. Ensure `uv` is installed.
* **Credentials:** You must create a `credentials.toml` file in the root directory to store your API keys:

```toml
[gemini]
api_key = "****"

[anthropic]
api_key = "****"
```
* **Run the Application:**

```powershell
uv run .\gui_2.py
```

# Development Conventions

* **Configuration Management:** The application uses two tiers of configuration:
  * `config.toml`: Global settings (UI theme, active provider, list of project paths).
  * `manual_slop.toml`: Per-project settings (files to track, discussion history, specific system prompts).
* **Tool Execution:** The AI acts primarily by generating PowerShell scripts. These scripts MUST be confirmed by the user via a GUI modal before execution. The AI also has access to read-only MCP-style file exploration tools and web search capabilities.
* **Context Refresh:** After every tool call that modifies the file system, the application automatically refreshes the file contents in the context, using the files' `mtime` to optimize reads.
* **UI State Persistence:** Window layouts and docking arrangements are automatically saved to and loaded from `dpg_layout.ini`.
* **Code Style:**
  * Use type hints where appropriate.
  * Internal methods and variables are generally prefixed with an underscore (e.g., `_flush_to_project`, `_do_generate`).
* **Logging:** All API communications are logged to `logs/comms_<ts>.log`. All executed scripts are saved to `scripts/generated/`.
13  JOURNAL.md  Normal file
@@ -0,0 +1,13 @@
# Engineering Journal

## 2026-02-28 14:43

### Documentation Framework Implementation
- **What**: Implemented Claude Conductor modular documentation system
- **Why**: Improve AI navigation and code maintainability
- **How**: Used `npx claude-conductor` to initialize framework
- **Issues**: None - clean implementation
- **Result**: Documentation framework successfully initialized

---
45  MMA_Support/Architecture_Recommendation.md  Normal file
@@ -0,0 +1,45 @@
# MMA Hierarchical Delegation: Recommended Architecture

## 1. Overview
The Multi-Model Architecture (MMA) utilizes a 4-Tier hierarchy to ensure token efficiency and structural integrity. The primary agent (Conductor) acts as the Tier 2 Tech Lead, delegating specific, stateless tasks to Tier 3 (Workers) and Tier 4 (Utility) agents.

## 2. Agent Roles & Responsibilities

### Tier 2: The Conductor (Tech Lead)
- **Role:** Orchestrator of the project lifecycle via the Conductor framework.
- **Context:** High-reasoning, long-term memory of project goals and specifications.
- **Key Tool:** `mma-orchestrator` skill (Strategy).
- **Delegation Logic:** Identifies tasks that would bloat the primary context (large code blocks, massive error traces) and spawns sub-agents.

### Tier 3: The Worker (Contributor)
- **Role:** Stateless code generator.
- **Context:** Isolated. Sees only the target file and the specific ticket.
- **Protocol:** Receives a "Worker" system prompt. Outputs clean code or diffs.
- **Invocation:** `.\scripts\run_subagent.ps1 -Role Worker -Prompt "..."`

### Tier 4: The Utility (QA/Compressor)
- **Role:** Stateless translator and summarizer.
- **Context:** Minimal. Sees only the error trace or snippet.
- **Protocol:** Receives a "QA" system prompt. Outputs compressed findings (max 50 tokens).
- **Invocation:** `.\scripts\run_subagent.ps1 -Role QA -Prompt "..."`

## 3. Invocation Protocol

### Step 1: Detection
Tier 2 detects a delegation trigger:
- Coding task > 50 lines.
- Error trace > 100 lines.

### Step 2: Spawning
Tier 2 calls the delegation script:
```powershell
.\scripts\run_subagent.ps1 -Role <Worker|QA> -Prompt "Specific instructions..."
```

### Step 3: Integration
Tier 2 receives the sub-agent's response.
- **If Worker:** Tier 2 applies the code changes (using `replace` or `write_file`) and verifies.
- **If QA:** Tier 2 uses the compressed error to inform the next fix attempt or passes it to a Worker.

## 4. System Prompt Management
The `run_subagent.ps1` script should be updated to maintain a library of role-specific system prompts, ensuring that Tier 3/4 agents remain focused and tool-free (to prevent nested complexity).
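A minimal sketch of such a prompt library (the prompt texts are illustrative, paraphrased from the role descriptions above; the real library would live beside `run_subagent.ps1`):

```python
# Hypothetical role -> system-prompt registry for Tier 3/4 sub-agents.
ROLE_PROMPTS = {
    "Worker": ("You are a stateless code generator. Output only clean code "
               "or diffs for the given ticket. You have no tools."),
    "QA": ("You are an error parser. Output a compressed finding of at most "
           "50 tokens. You have no tools."),
}

def system_prompt(role: str) -> str:
    """Look up the system prompt for a sub-agent role; fail loudly otherwise."""
    try:
        return ROLE_PROMPTS[role]
    except KeyError:
        raise ValueError(f"Unknown sub-agent role: {role!r}") from None
```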
32  MMA_Support/Data_Pipelines_and_Config.md  Normal file
@@ -0,0 +1,32 @@
# Data Pipelines, Memory Views & Configuration

The 4-Tier Architecture relies on strictly managed data pipelines and configuration files to prevent token bloat and maintain a deterministically safe execution environment.

## 1. AST Extraction Pipelines (Memory Views)

To prevent LLMs from hallucinating or consuming massive context windows, raw file text is heavily restricted. `file_cache.py` uses Tree-sitter for deterministic Abstract Syntax Tree (AST) parsing to generate specific views:

1. **The Directory Map (Tier 1):** Just filenames and nested paths (e.g., the output of `tree /F`). No source code.
2. **The Skeleton View (Tier 2 & 3 Dependencies):** Extracts only `class` and `def` signatures, parameters, and type hints. Strips all docstrings and function bodies, replacing them with `pass`. Used for foreign modules a worker must call but not modify.
3. **The Curated Implementation View (Tier 2 Target Modules):**
   * Keeps class/struct definitions.
   * Keeps module-level docstrings and block comments (heuristics).
   * Keeps full bodies of functions marked with `@core_logic` or `# [HOT]`.
   * Replaces standard function bodies with `... # Hidden`.
4. **The Raw View (Tier 3 Target File):** Unredacted, line-by-line source code of the *single* file a Tier 3 worker is assigned to modify.
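As an illustration of the Skeleton View transform: the document specifies Tree-sitter, but the same idea can be shown self-contained with Python's stdlib `ast` (signatures kept, bodies and docstrings replaced with `pass`):

```python
import ast

def skeleton_view(source: str) -> str:
    """Keep class/def signatures; replace every function body with `pass`."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            node.body = [ast.Pass()]  # drops docstring and implementation
    return ast.unparse(tree)

src = (
    "class Greeter:\n"
    "    def greet(self, name: str) -> str:\n"
    "        \"\"\"Say hello.\"\"\"\n"
    "        return f'hi {name}'\n"
)
print(skeleton_view(src))
```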
## 2. Configuration Schema

The architecture separates sensitive billing logic from AI behavior routing.

* **`credentials.toml` (Security Prerequisite):** Holds the bare-metal authentication keys (`gemini_api_key`, `anthropic_api_key`, `deepseek_api_key`). **This file must be in `.gitignore`.** Loaded strictly for instantiating HTTP clients.
* **`project.toml` (Repo Rules):** Holds repository-specific bounds (e.g., "This project uses Python 3.12 and strictly follows PEP 8").
* **`agents.toml` (AI Routing):** Defines the hardcoded hierarchy's operational behaviors. Includes fallback models (`default_expensive`, `default_cheap`), Tier 1/2 overarching parameters (temperature, base system prompts), and Tier 3 worker archetypes (`refactor`, `codegen`, `contract_stubber`) mapped to specific models (DeepSeek V3, Gemini Flash) and `trust_level` tags (`step` vs. `auto`).

## 3. LLM Output Formats

To ensure robust parsing and avoid JSON string-escaping nightmares, the architecture uses a hybrid approach for LLM outputs depending on the tier:

* **Native Structured Outputs (JSON Schema forced by API):** Used for Tier 1 and Tier 2 routing and orchestration. The model provider guarantees the syntax, allowing clean parsing of `Track` and `Ticket` metadata by `pydantic`.
* **XML Tags (`<file_path>`, `<file_content>`):** Used for Tier 3 code generation and tools. XML natively isolates syntax and requires zero string escaping. The UI/Orchestrator parses these tags via regex to safely extract raw Python code without bracket-matching failures.
* **Godot ECS Flat List (Linearized Entities with ID Pointers):** Instead of deeply nested JSON (which models hallucinate across 500 tokens), Tier 1/2 Orchestrators define complex dependency DAGs as a flat list of items (e.g., `[Ticket id="tkt_impl" depends_on="tkt_stub"]`). The Python state machine reconstructs the DAG locally.
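The XML-tag format above can be parsed with a short regex along these lines (a minimal sketch using the tag names from this document; real payloads may need additional rules for nested or malformed tags):

```python
import re

# Extract <file_path>/<file_content> pairs from a Tier 3 reply without
# any JSON string-escaping concerns.
PATTERN = re.compile(
    r"<file_path>(?P<path>.*?)</file_path>\s*"
    r"<file_content>(?P<content>.*?)</file_content>",
    re.DOTALL,
)

def extract_files(llm_output: str) -> dict[str, str]:
    return {m["path"].strip(): m["content"] for m in PATTERN.finditer(llm_output)}

reply = "<file_path>app.py</file_path><file_content>print('hi')\n</file_content>"
print(extract_files(reply))
```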
30  MMA_Support/Final_Analysis_Report.md  Normal file
@@ -0,0 +1,30 @@
# MMA Tiered Architecture: Final Analysis Report

## 1. Executive Summary
The implementation and verification of the 4-Tier Hierarchical Multi-Model Architecture (MMA) within the Conductor framework have been successfully completed. The architecture provides a robust "Token Firewall" that prevents the primary context from being bloated by repetitive coding tasks and massive error traces.

## 2. Architectural Findings

### Centralized Strategy vs. Role-Based Sub-Agents
- **Decision:** A Hybrid Approach was implemented.
- **Rationale:** The Tier 2 Orchestrator (Conductor) maintains the high-level strategy via a centralized skill, while Tier 3 (Worker) and Tier 4 (QA) agents are governed by surgical, role-specific system prompts. This ensures that sub-agents remain focused and stateless without the overhead of complex, nested tool-usage logic.

### Delegation Efficacy
- **Tier 3 (Worker):** Successfully isolated code generation from the main conversation. The worker generates clean code/diffs that are then integrated by the Orchestrator.
- **Tier 4 (QA):** Demonstrated superior token efficiency by compressing multi-hundred-line stack traces into ~20-word actionable fixes.
- **Traceability:** The `-ShowContext` flag in `scripts/run_subagent.ps1` provides immediate visibility into the "Connective Tissue" of the hierarchy, allowing human supervisors to monitor the hand-offs.

## 3. Recommended Protocol (Final)

1. **Identification:** Tier 2 identifies a "Bloat Trigger" (Coding > 50 lines, Errors > 100 lines).
2. **Delegation:** Tier 2 spawns a sub-agent via `.\scripts\run_subagent.ps1 -Role [Worker|QA] -Prompt "..."`.
3. **Integration:** Tier 2 receives the stateless response and applies it to the project state.
4. **Checkpointing:** Tier 2 performs Phase-level checkpoints to "Wipe" trial-and-error memory and solidify the new state.

## 4. Verification Results
- **Automated Tests:** 100% Pass (4/4 tests in `tests/conductor/test_infrastructure.py`).
- **Isolation:** Confirmed via `test_subagent_isolation_live`.
- **Live Trace:** Manually verified and approved by the user (Tier 2 -> 3 -> 4 flow).

## 5. Conclusion
46  MMA_Support/Implementation_Tracks.md  Normal file
@@ -0,0 +1,46 @@
# Iteration Plan (Implementation Tracks)

To safely refactor a linear, single-agent codebase into the 4-Tier Multi-Model Architecture without breaking the working prototype, the implementation should be sequenced into these five isolated Epics (Tracks):

## Track 1: The Memory Foundations (AST Parser)
**Goal:** Build the engine that prevents token bloat by turning massive source files into curated memory views.
**Implementation Details:**
1. Integrate `tree-sitter` and language bindings into `file_cache.py`.
2. Build `ASTParser` extraction rules:
   * *Skeleton View:* Strip function/class bodies, preserving only signatures, parameters, and type hints.
   * *Curated View:* Preserve class structures, module docstrings, and bodies of functions marked `# [HOT]` or `@core_logic`. Replace standard bodies with `... # Hidden`.
3. **Acceptance:** `file_cache.get_curated_view('script.py')` returns a perfectly formatted summary string in the terminal.

## Track 2: State Machine & Data Structures
**Goal:** Define the rigid Python objects the AI agents pass to each other, so the system relies on structured data rather than loose chat strings.
**Implementation Details:**
1. Create `models.py` with `pydantic` or `dataclasses` for `Track` (Epic) and `Ticket` (Task).
2. Define `WorkerContext` holding the Ticket ID, the assigned model (from `agents.toml`), an isolated `credentials.toml` injection, and a `messages` payload array.
3. Add helper methods for state mutation (e.g., `ticket.mark_blocked()`, `ticket.mark_complete()`).
4. **Acceptance:** Instantiate a `Track` with 3 `Tickets` and successfully enforce state changes in Python without AI involvement.

## Track 3: The Linear Orchestrator & Execution Clutch
**Goal:** Build the synchronous, debuggable core loop that runs a single Tier 3 Worker and pauses for human approval.
**Implementation Details:**
1. Create `multi_agent_conductor.py` with a `run_worker_lifecycle(ticket: Ticket)` function.
2. Inject context (Raw View from `file_cache.py`) and format the `messages` array for the API.
3. Implement the Clutch (HITL): an `input()` pause for the CLI, or a wait state for the GUI, before executing the returned tool (e.g., `write_file`). Allow manual memory mutation of the JSON payload.
4. **Acceptance:** The script sends a hardcoded Ticket to DeepSeek, pauses in the terminal showing a diff, waits for user approval, applies the diff via `mcp_client.py`, and wipes the worker's history.
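The Track 3 Clutch can be sketched as a small approval gate (a minimal sketch with hypothetical parameter names; the real `run_worker_lifecycle` takes a `Ticket` and calls the API):

```python
def run_worker_lifecycle(ticket_prompt: str, proposed_diff: str,
                         apply_diff=lambda diff: None, confirm=input) -> bool:
    """Pause between generation and execution; apply the diff only on approval."""
    print(f"Ticket: {ticket_prompt}")
    print("--- proposed diff ---")
    print(proposed_diff)
    if confirm("Apply this diff? [y/N] ").strip().lower() == "y":
        apply_diff(proposed_diff)  # e.g., hand off to mcp_client.py
        return True
    return False
```

Passing `confirm` and `apply_diff` as parameters keeps the gate testable and lets a GUI substitute its own wait state for the CLI `input()`.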
## Track 4: Tier 4 QA Interception
**Goal:** Stop error traces from destroying the Worker's token window by routing crashes through a stateless translator.
**Implementation Details:**
1. In `shell_runner.py`, intercept `stderr` (e.g., when `returncode != 0`).
2. Do *not* append `stderr` to the main Worker's history. Instead, make a synchronous API call to the `default_cheap` model.
3. Prompt: *"You are an error parser. Output only a 1-2 sentence instruction on how to fix this syntax error."* Send the raw `stderr` and the target file snippet.
4. Append the translated 20-word fix to the main Worker's history as a "System Hint".
5. **Acceptance:** A deliberate syntax error triggers the execution engine to silently ping the cheap API, returning a 20-word correction to the Worker instead of a 200-line stack trace.
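The interception logic in steps 1-4 can be sketched as follows (`summarize_error` is a hypothetical stand-in for the cheap-model API call; the point is that raw `stderr` never enters the Worker's history, only the compressed hint does):

```python
def handle_result(returncode: int, stderr: str, worker_history: list[dict],
                  summarize_error) -> None:
    """On failure, route stderr through Tier 4 and inject only the short hint."""
    if returncode != 0:
        hint = summarize_error(stderr)  # Tier 4: stateless, cheap, single-shot
        worker_history.append({"role": "user",
                               "content": f"System Hint: {hint}"})
```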
## Track 5: UI Decoupling & Tier 1/2 Routing (The Final Boss)
**Goal:** Bring the system online by letting Tier 1 and Tier 2 dynamically generate Tickets managed by the async Event Bus.
**Implementation Details:**
1. Implement an `asyncio.Queue` in `multi_agent_conductor.py`.
2. Write Tier 1 & 2 system prompts forcing output as strict JSON arrays (Tracks and Tickets).
3. Write the Dispatcher async loop to convert JSON into `Ticket` objects and push them to the queue.
4. Enforce the Stub Resolver: if a Ticket archetype is `contract_stubber`, pause dependent Tickets, run the stubber, trigger `file_cache.py` to rebuild the Skeleton View, then resume.
5. **Acceptance:** A vague prompt ("Refactor config system") results in a Tier 1 Track and Tier 2 Tickets (interface stub + implementation). The system executes the stub, updates the AST, and finishes the implementation automatically (or steps through if the Linear toggle is on).
37  MMA_Support/Orchestrator_Engine.md  Normal file
@@ -0,0 +1,37 @@
# The Orchestrator Engine & UI

To transition from a linear, single-agent chat box to a multi-agent control center, the GUI must be decoupled from the LLM execution loops. A single-agent UI assumes a linear flow (*User types -> UI waits -> LLM responds -> UI updates*), which freezes the application if a Tier 1 PM waits for human approval while Tier 3 Workers run local tests in the background.

## 1. The Async Event Bus (Decoupling UI from Agents)

The GUI acts as a "dumb" renderer. It only renders state; it never manages state.

* **The Agent Bus (Message Queue):** A thread-safe signaling system (e.g., `asyncio.Queue`, `pyqtSignal`) passes messages between agents, the UI, and the filesystem.
* **Background Workers:** When Tier 1 spawns a Tier 2 Tech Lead, the GUI does not wait. It pushes a `UserRequestEvent` to the Conductor's queue. The Conductor runs the LLM call asynchronously and fires `StateUpdateEvents` back for the GUI to redraw.

## 2. The Execution Clutch (HITL)

Every spawned worker panel implements an execution state toggle based on the `trust_level` defined in `agents.toml`.

* **Step Mode (Lock-step):** The worker pauses **twice** per cycle:
  1. *After* generating a response/tool-call, but *before* executing the tool. The GUI renders a preview (e.g., a diff of lines 40-50) and offers `[Approve]`, `[Edit Payload]`, or `[Abort]`.
  2. *After* executing the tool, but *before* sending output back to the LLM (allowing verification of the system output).
* **Auto Mode (Fire-and-forget):** The worker loops continuously until it outputs a "Task Complete" status to the Router.

## 3. Memory Mutation (The "Debug" Superpower)

If a worker generates a flawed plan in Step Mode, the "Memory Mutator" allows the user to click the last message and edit the raw JSON/text directly before hitting "Approve." By rewriting the AI's working memory mid-task, the model proceeds as if it had generated the correct idea, saving the context window from a restart caused by a minor hallucination.

## 4. The Global Execution Toggle

A Global Execution Toggle overrides all individual agent trust levels for debugging race conditions or context leaks.

* **Mode = "async" (Production):** The Dispatcher throws Tickets into an `asyncio.TaskGroup`. They spawn instantly, compete for API rate limits, read the skeleton, and run in parallel.
* **Mode = "linear" (Debug):** The Dispatcher iterates through the array sequentially in a strict `for` loop. It `await`s absolute completion of Ticket 1 (including QA loops and code review) before instantiating the `WorkerAgent` for Ticket 2. This enforces a deterministic state machine and outputs state snapshots (`debug_state.json`) for manual verification.
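The toggle reduces to a small branch in the Dispatcher (a minimal sketch; `run_ticket` here is a trivial stand-in for the full worker lifecycle):

```python
import asyncio

async def dispatch(tickets, run_ticket, mode: str = "linear"):
    if mode == "async":  # production: all tickets race in parallel
        async with asyncio.TaskGroup() as tg:  # Python 3.11+
            for t in tickets:
                tg.create_task(run_ticket(t))
    else:  # debug: strict sequential order, deterministic state
        for t in tickets:
            await run_ticket(t)

done: list[str] = []

async def run_ticket(ticket_id: str) -> None:
    done.append(ticket_id)  # stand-in for the real worker lifecycle

asyncio.run(dispatch(["tkt_1", "tkt_2"], run_ticket, mode="linear"))
print(done)  # linear mode preserves ticket order
```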
## 5. State Machine (Dataclasses)

The Conductor relies on strict definitions for `Track` and `Ticket` to enforce state and UI rendering (e.g., using `dataclasses` or `pydantic`).

* **`Ticket`:** Contains `id`, `target_file`, `prompt`, `worker_archetype`, `status` (pending, running, blocked, step_paused, completed), and a `dependencies` list of Ticket IDs that must finish first.
* **`Track`:** Contains `id`, `title`, `description`, `status`, and a list of `Tickets`.
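The field lists above transcribe directly into dataclasses (a sketch; status values mirror the states named in this document):

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    id: str
    target_file: str
    prompt: str
    worker_archetype: str
    status: str = "pending"  # pending | running | blocked | step_paused | completed
    dependencies: list[str] = field(default_factory=list)

@dataclass
class Track:
    id: str
    title: str
    description: str
    status: str = "pending"
    tickets: list[Ticket] = field(default_factory=list)

t = Ticket(id="tkt_stub", target_file="api.py", prompt="Stub the interface",
           worker_archetype="contract_stubber")
track = Track(id="trk_1", title="Refactor config", description="", tickets=[t])
```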
1545  MMA_Support/OriginalDiscussion.md  Normal file
File diff suppressed because it is too large
18  MMA_Support/Overview.md  Normal file
@@ -0,0 +1,18 @@
# System Specification: 4-Tier Hierarchical Multi-Model Architecture

**Project:** `manual_slop` (or equivalent Agentic Co-Dev Prototype)

**Core Philosophy:** Token Economy, Strict Memory Siloing, and Human-In-The-Loop (HITL) Execution.

## 1. Architectural Overview

This system rejects the "monolithic black-box" approach to agentic coding. Instead of passing an entire codebase into a single expensive context window, the architecture mimics a senior engineering department. It uses a 4-Tier hierarchy where cognitive load and context are aggressively filtered from top to bottom.

Expensive, high-reasoning models manage metadata and architecture (Tier 1 & 2), while cheap, fast models handle repetitive syntax and error parsing (Tier 3 & 4).

### 1.1 Core Paradigms

* **Token Firewalling:** Error logs and deep history are never allowed to bubble up to high-tier models. The system relies heavily on abstracted AST views (Skeleton, Curated) rather than raw code when context allows.
* **Context Amnesia:** Worker agents (Tier 3) have their trial-and-error histories wiped upon task completion to prevent context ballooning and hallucination.
* **The Execution Clutch (HITL):** Agents operate based on Archetype Trust Scores defined in configuration. Trusted patterns run in `Auto` mode; untrusted or complex refactors run in `Step` mode, pausing before tool execution for human review and JSON history mutation.
* **Interface-Driven Development (IDD):** The architecture inherently prioritizes the creation of contracts (stubs, schemas) before implementation, allowing workers to proceed in parallel without breaking cross-module boundaries.
38  MMA_Support/Tier1_Orchestrator.md  Normal file
@@ -0,0 +1,38 @@
# Tier 1: The Top-Level Orchestrator (Product Manager)

**Designated Models:** Gemini 3.1 Pro, Claude 3.5 Sonnet.
**Execution Frequency:** Low (Start of feature, Macro-merge resolution).
**Core Role:** Epic planning, architecture enforcement, and cross-module task delegation.

The Tier 1 Orchestrator is the most capable and expensive model in the hierarchy. It operates strictly on metadata, summaries, and executive-level directives. It **never** sees raw implementation code.

## Memory Context & Paths

### Path A: Epic Initialization (Project Planning)
* **Trigger:** User drops a massive new feature request or architectural shift into the main UI.
* **What it Sees (Context):**
  * **The User Prompt:** The raw feature request.
  * **Project Meta-State:** `project.toml` (rules, allowed languages, dependencies).
  * **Repository Map:** A strict, file-tree outline (names and paths only).
  * **Global Architecture Docs:** High-level markdown files (e.g., `docs/guide_architecture.md`).
* **What it Ignores:** All source code, all AST skeletons, and all previous micro-task histories.
* **Output Format:** A JSON array (Godot ECS Flat List format) of `Tracks` (Jira Epics), identifying which modules will be affected, the required Tech Lead persona, and the severity level.

### Path B: Track Delegation (Sprint Kickoff)
* **Trigger:** The PM hands a defined Track down to a Tier 2 Tech Lead.
* **What it Sees (Context):**
  * **The Target Track:** The specific goal and Acceptance Criteria generated in Path A.
  * **Module Interfaces (Skeleton View):** Strict AST skeleton (just class/function definitions) *only* for the modules this specific Track is allowed to touch.
  * **Track Roster:** A list of currently active or completed Tracks to prevent duplicate work.
* **What it Ignores:** Unrelated module docs, the original massive user prompt, implementation details.
* **Output Format:** A compiled "Track Brief" (system prompt + curated file list) passed to instantiate the Tier 2 Tech Lead panel.

### Path C: Macro-Merge & Acceptance Review (Severity Resolution)
* **Trigger:** A Tier 2 Tech Lead reports "Track Complete" and submits a pull request/diff for a "High Severity" task.
* **What it Sees (Context):**
  * **Original Acceptance Criteria:** The Track's goals.
  * **Tech Lead's Executive Summary:** A ~200-word explanation of the chosen implementation algorithm.
  * **The Macro-Diff:** Actual changes made to the codebase.
  * **Curated Implementation View:** For boundary files, ensuring the merge doesn't break foreign modules.
* **What it Ignores:** Tier 3 Worker trial-and-error histories, Tier 4 error logs, raw bodies of unchanged functions.
* **Output Format:** "Approved" (commits to memory) OR "Rejected" with specific architectural feedback for Tier 2.
46  MMA_Support/Tier2_TechLead.md  Normal file
@@ -0,0 +1,46 @@
# Tier 2: The Track Conductor (Tech Lead)

**Designated Models:** Gemini 3.0 Flash, Gemini 2.5 Pro.
**Execution Frequency:** Medium.
**Core Role:** Module-specific planning, code review, spawning Worker agents, and Topological Dependency Graph management.

The Tech Lead bridges the gap between high-level architecture and actual code syntax. It operates in a "need-to-know" state, utilizing AST parsing (`file_cache.py`) to keep token counts low while maintaining structural awareness of its assigned modules.

## Memory Context & Paths

### Path A: Sprint Planning (Task Delegation)
* **Trigger:** Tier 1 (PM) assigns a Track (Epic) and wakes up the Tech Lead.
* **What it Sees (Context):**
  * **The Track Brief:** Acceptance Criteria from Tier 1.
  * **Curated Implementation View (Target Modules):** AST-extracted class structures, docstrings, and `# [HOT]` function bodies for the 1-3 files this Track explicitly modifies.
  * **Skeleton View (Foreign Modules):** Only function signatures and return types for external dependencies.
* **What it Ignores:** The rest of the repository, the PM's overarching project-planning logic, raw line-by-line code of non-hot functions.
* **Output Format:** A JSON array (Godot ECS Flat List format) of discrete Tier 3 `Tickets` (e.g., Ticket 1: *Write DB migration script*, Ticket 2: *Update core API endpoints*), including `depends_on` pointers to construct an execution DAG.
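Reconstructing the execution DAG from the flat Ticket list is a local, deterministic step (a minimal sketch; the `id`/`depends_on` field names follow this document's flat-list example):

```python
def build_dag(tickets: list[dict]) -> dict[str, list[str]]:
    """Map each ticket id to the ids that must complete before it."""
    return {t["id"]: t.get("depends_on", []) for t in tickets}

def ready(dag: dict[str, list[str]], done: set[str]) -> list[str]:
    """Tickets not yet done whose dependencies are all satisfied."""
    return [tid for tid, deps in dag.items()
            if tid not in done and all(d in done for d in deps)]

flat = [{"id": "tkt_stub"}, {"id": "tkt_impl", "depends_on": ["tkt_stub"]}]
dag = build_dag(flat)
print(ready(dag, done=set()))
print(ready(dag, done={"tkt_stub"}))
```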
### Path B: Code Review (Local Integration)
* **Trigger:** A Tier 3 Contributor completes a Ticket and submits a diff, OR Tier 4 (QA) flags a persistent failure.
* **What it Sees (Context):**
  * **Specific Ticket Goal:** What the Contributor was instructed to do.
  * **Proposed Diff:** The exact line changes submitted by Tier 3.
  * **Test/QA Output:** Relevant logs from Tier 4 compiler checks.
  * **Curated Implementation View:** To cross-reference the proposed diff against the existing architecture.
* **What it Ignores:** The Contributor's internal trial-and-error chat history. It only sees the final submission.
* **Output Format:** *Approve* (merges diff into working branch and updates Curated View) or *Reject* (sends technical critique back to Tier 3).

### Path C: Track Finalization (Upward Reporting)
* **Trigger:** All Tier 3 Tickets assigned to this Track are marked "Approved."
* **What it Sees (Context):**
  * **Original Track Brief:** To verify requirements were met.
  * **Aggregated Track Diff:** The sum total of all changes made across all Tier 3 Tickets.
  * **Dependency Delta:** A list of any new foreign modules or libraries imported.
* **What it Ignores:** The back-and-forth review cycles, the original AST Curated View.
* **Output Format:** An Executive Summary and the final Macro-Diff, sent back to Tier 1.

### Path D: Contract-First Delegation (Stub-and-Resolve)
* **Trigger:** Tier 2 evaluates a Track and detects a cross-module dependency (or a single massive refactor) requiring an undefined signature.
* **Role:** Force Interface-Driven Development (IDD) to prevent hallucination.
* **Execution Flow:**
  1. **Contract Definition:** Splits the requirement into a `Stub Ticket`, a `Consumer Ticket`, and an `Implementation Ticket`.
  2. **Stub Generation:** Spawns a cheap Tier 3 worker (e.g., the DeepSeek V3 `contract_stubber` archetype) to generate the empty function signature, type hints, and docstrings.
  3. **Skeleton Broadcast:** The stub merges, and the system instantly re-runs Tree-sitter to update the global Skeleton View.
  4. **Parallel Implementation:** Tier 2 simultaneously spawns the `Consumer` (codes against the skeleton) and the `Implementer` (fills in the stub logic) in isolated contexts.
35  MMA_Support/Tier3_Workers.md  Normal file
@@ -0,0 +1,35 @@
# Tier 3: The Worker Agents (Contributors)

**Designated Models:** DeepSeek V3/R1, Gemini 2.5 Flash.
**Execution Frequency:** High (The core loop).
**Core Role:** Generating syntax, writing localized files, running unit tests.

The engine room of the system. Contributors execute the highest volume of API calls. Their memory context is ruthlessly pruned. By leveraging cheap, fast models, they operate with zero architectural anxiety: they just write the code they are assigned. They are "Amnesiac Workers," having their history wiped between tasks to prevent context ballooning.

## Memory Context & Paths

### Path A: Heads-Down Execution (Task Execution)
* **Trigger:** Tier 2 (Tech Lead) hands down a hyper-specific Ticket.
* **What it Sees (Context):**
  * **The Ticket Prompt:** The exact, isolated instructions from Tier 2.
  * **The Target File (Raw View):** The raw, unredacted, line-by-line source code of *only* the specific file (or class/function) it was assigned to modify.
  * **Foreign Interfaces (Skeleton View):** Strict AST skeleton (signatures only) of external dependencies required by the ticket.
* **What it Ignores:** Epic/Track goals, the Tech Lead's Curated View, other files in the same directory, parallel Tickets.
* **Output Format:** XML Tags (`<file_path>`, `<file_content>`) defining direct file modifications, or `mcp_client.py` tool payloads.

### Path B: Trial and Error (Local Iteration & Tool Execution)
* **Trigger:** The Contributor runs a local linter/test, encounters a syntax error, or the human pauses execution using "Step" mode.
* **What it Sees (Context):**
  * **Ephemeral Working History:** A short, rolling window of its last 2–3 attempts (e.g., "Attempt 1: Wrote code -> Tool Output: SyntaxError").
  * **Tier 4 (QA) Injections:** Compressed (20-50 token) fix recommendations from Tier 4 agents (e.g., "Add a closing bracket on line 42").
  * **Human Mutations:** Any direct edits made to its JSON history payload before proceeding.
* **What it Ignores:** Tech Lead code reviews, attempts older than the rolling window (wiped to save tokens).
* **Output Format:** Revised tool payloads, until tests pass or the human approves.

### Path C: Task Submission (Micro-Pull Request)
* **Trigger:** The code executes cleanly, and "Step" mode is finalized into "Task Complete."
* **What it Sees (Context):**
  * **The Original Ticket:** To confirm instructions were met.
  * **The Final State:** The cleanly modified file or exact diff.
* **What it Ignores:** **All of Path B.** Before submission to Tier 2, the orchestrator wipes the messy trial-and-error history from the payload.
* **Output Format:** A concise completion message and the clean diff, sent up to Tier 2.
33  MMA_Support/Tier4_Utility.md  Normal file
@@ -0,0 +1,33 @@
# Tier 4: The Utility Agents (Compiler / QA)

**Designated Models:** DeepSeek V3 (lowest cost possible).
**Execution Frequency:** On-demand (intercepts local failures).
**Core Role:** Single-shot, stateless translation of machine garbage into human English.

Tier 4 acts as the financial firewall. It solves the expensive problem of feeding massive (e.g., 3,000-token) stack traces back into a mid-tier LLM's context window. Tier 4 agents wake up, translate errors, and immediately die.

## Memory Context & Paths

### Path A: The Stack Trace Interceptor (Translator)

* **Trigger:** A Tier 3 Contributor executes a script, resulting in a non-zero exit code with a massive `stderr` payload.
* **What it Sees (Context):**
  * **Raw Error Output:** The exact traceback from the runtime/compiler.
  * **Offending Snippet:** *Only* the specific function or 20-line block of code where the error originated.
* **What it Ignores:** Everything else. It is blind to the "Why" and focuses only on "What broke."
* **Output Format:** A surgical, highly compressed string (20–50 tokens) passed back into the Tier 3 Contributor's working memory (e.g., "Syntax error on line 42: you missed a closing bracket. Add `]`").

### Path B: The Linter / Formatter (Pedant)

* **Trigger:** Tier 3 believes it finished a Ticket, but pre-commit hooks (e.g., `ruff`, `eslint`) fail.
* **What it Sees (Context):**
  * **Linter Warning:** The specific error (e.g., "Line too long", "Missing type hint").
  * **Target File:** The code written by Tier 3.
* **What it Ignores:** Business logic. It only cares about styling rules.
* **Output Format:** A direct `sed` command or silent diff overwrite via tools to fix the formatting without bothering Tier 2 or consuming Tier 3 loops.

### Path C: The Flaky Test Debugger (Isolator)

* **Trigger:** A localized unit test fails due to logic (e.g., `assert 5 == 4`), not a syntax crash.
* **What it Sees (Context):**
  * **Failing Test Function:** The exact `pytest` or `go test` block.
  * **Target Function:** The specific function being tested.
* **What it Ignores:** The rest of the test suite and module.
* **Output Format:** A quick diagnosis sent to Tier 3 (e.g., "The test expects an integer, but your function is currently returning a stringified float. Cast to `int`").
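The Path A firewall amounts to clipping the traceback tail plus the offending snippet into one small, stateless prompt instead of forwarding the full trace. A hedged sketch — the function name and line budget are illustrative, not from the codebase:

```python
def build_tier4_prompt(stderr: str, snippet: str, max_trace_lines: int = 15) -> str:
    # Keep only the tail of the traceback; the last lines carry the actual error.
    tail = "\n".join(stderr.strip().splitlines()[-max_trace_lines:])
    return (
        "Summarize this stack trace into a 20-word fix.\n"
        f"--- traceback (tail) ---\n{tail}\n"
        f"--- offending code ---\n{snippet}"
    )

# A 400-line traceback collapses into a prompt of a couple dozen lines.
prompt = build_tier4_prompt("frame\n" * 400 + "SyntaxError: unexpected EOF", "def f(:\n    pass")
```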
MMA_Support/mma_tiered_orchestrator_skill.md (new file, 66 lines)
@@ -0,0 +1,66 @@
# Skill: MMA Tiered Orchestrator

## Description
This skill enforces the 4-Tier Hierarchical Multi-Model Architecture (MMA) directly within the Gemini CLI using token firewalling and sub-agent task delegation. It teaches the CLI to act as a Tier 1/2 Orchestrator, dispatching stateless tasks to cheaper models via shell commands, thereby preventing massive error traces or heavy coding contexts from polluting the primary prompt context.

<instructions>
# MMA Token Firewall & Tiered Delegation Protocol

You are operating as a Tier 1 Product Manager or Tier 2 Tech Lead within the MMA Framework. Your context window is extremely valuable and must be protected from token bloat (such as raw, repetitive code edits, trial-and-error histories, or massive stack traces).

To accomplish this, you MUST delegate token-heavy or stateless tasks to "Tier 3 Contributors" or "Tier 4 QA Agents" by spawning secondary Gemini CLI instances via `run_shell_command`.

**CRITICAL Prerequisite:**
To avoid hanging the CLI and ensure proper environment authentication, you MUST NOT call the `gemini` command directly. Instead, you MUST use the wrapper script:
`.\scripts\run_subagent.ps1 -Prompt "..."`

## 1. The Tier 3 Worker (Heads-Down Coding)
When you need to perform a significant code modification (e.g., refactoring a 500-line script, writing a massive class, or implementing a predefined spec):
1. **DO NOT** attempt to write the code or use `replace`/`write_file` yourself. Your history will bloat.
2. **DO** construct a single, highly specific prompt.
3. **DO** spawn a sub-agent using `run_shell_command` pointing to the target file.
   *Command:* `.\scripts\run_subagent.ps1 -Prompt "Modify [FILE_PATH] to implement [SPECIFIC_INSTRUCTION]. Only write the code, no pleasantries."`
4. If you need the sub-agent to apply changes automatically instead of just returning text, use `gemini run` or pipe the output appropriately. However, the best method is to let the sub-agent modify the code and return "Done."

## 2. The Tier 4 QA Agent (Error Translation)
If you run a local test (e.g., `npm test`, `pytest`, `go run`) via `run_shell_command` and it fails with a massive traceback (e.g., 200+ lines of `stderr`):
1. **DO NOT** analyze the raw `stderr` in your own context window.
2. **DO** immediately spawn a stateless Tier 4 agent to compress the error.
3. *Command:* `.\scripts\run_subagent.ps1 -Prompt "Summarize this stack trace into a 20-word fix: [PASTE_SNIPPET_OF_STDERR_HERE]"`
4. Use the 20-word fix returned by the Tier 4 agent to inform your next architectural decision, or pass it to the Tier 3 worker.

## 3. Context Amnesia (Phase Checkpoints)
When you complete a major Phase or Track within the `conductor` workflow:
1. Stage your changes and commit them.
2. Draft a comprehensive summary of the state changes in a Git Note attached to the commit.
3. Treat the checkpoint as a "Memory Wipe." Actively disregard previous conversational turns and trial-and-error histories. Rely exclusively on the newly generated Git Note and the physical state of the files on disk for your next Phase.
</instructions>

<examples>
### Example 1: Spawning a Tier 4 QA Agent
**User / System:** `pytest tests/test_gui.py` failed with 400 lines of output.
**Agent (You):**
```json
{
  "command": ".\\scripts\\run_subagent.ps1 -Prompt \"Summarize this stack trace into a 20-word fix: [snip first 30 lines...]\"",
  "description": "Spawning Tier 4 QA to compress error trace statelessly."
}
```

### Example 2: Spawning a Tier 3 Worker
**User:** Please implement the `ASTParser` class in `file_cache.py` as defined in Track 1.
**Agent (You):**
```json
{
  "command": ".\\scripts\\run_subagent.ps1 -Prompt \"Read file_cache.py and implement the ASTParser class using tree-sitter. Ensure you preserve docstrings but strip function bodies. Output the updated code or edit the file directly.\"",
  "description": "Delegating implementation to a Tier 3 Worker."
}
```
</examples>

<triggers>
- When asked to write large amounts of boilerplate or repetitive code.
- When encountering a large error trace from a shell execution.
- When explicitly instructed to act as a "Tech Lead" or "Orchestrator".
- When managing complex, multi-file Track implementations.
</triggers>
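The wrapper's core job — spawn a sub-agent process, capture its stdout, and never hang the parent — can be approximated in Python. The real wrapper is a PowerShell script; here `echo` stands in for the actual sub-agent CLI invocation, and the timeout value is an assumption:

```python
import subprocess

def run_subagent(prompt: str, timeout_s: int = 300) -> str:
    # Run the sub-agent as a separate process; the timeout guarantees the
    # orchestrator is never wedged waiting on a hung child.
    try:
        result = subprocess.run(
            ["echo", prompt],  # stand-in for the real sub-agent CLI invocation
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.stdout.strip()
    except subprocess.TimeoutExpired:
        return "[SUBAGENT TIMEOUT] no output captured"

reply = run_subagent("Done.")
```

Only the (short) captured reply enters the orchestrator's context; the child's working history dies with the process.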
MMA_UX_SPEC.md (new file, 36 lines)
@@ -0,0 +1,36 @@
# MMA Observability & UX Specification

## 1. Goal
Implement the visible surface area of the 4-Tier Hierarchical Multi-Model Architecture within `gui_2.py`. This ensures the user can monitor, control, and debug the multi-agent execution flow.

## 2. Core Components

### 2.1 MMA Dashboard Panel
- **Visibility:** A new dockable panel named "MMA Dashboard".
- **Track Status:** Display the current active `Track` ID and overall progress (e.g., "3/10 Tickets Complete").
- **Ticket DAG Visualization:** A list or simple graph representing the `Ticket` queue.
  - Each ticket shows: `ID`, `Target`, `Status` (Pending, Running, Paused, Complete, Blocked).
  - Visual indicators for dependencies (e.g., indented or linked).

### 2.2 The Execution Clutch (HITL)
- **Step Mode Toggle:** A global or per-track checkbox to enable "Step Mode".
- **Pause Points:**
  - **Pre-Execution:** When a Tier 3 worker generates a tool call (e.g., `write_file`), the engine pauses.
- **UI Interaction:** The GUI displays the proposed script/change and provides:
  - `[Approve]`: Proceed with execution.
  - `[Edit Payload]`: Open the Memory Mutator.
  - `[Abort]`: Mark the ticket as Blocked/Cancelled.
- **Visual Feedback:** Tactile/arcade-style blinking or color changes when the engine is "Paused for HITL".

### 2.3 Memory Mutator (The "Debug" Superpower)
- **Functionality:** A modal or dedicated text area that allows the user to edit the raw JSON conversation history of a paused worker.
- **Use Case:** Fixing AI hallucinations or providing specific guidance mid-turn without restarting the context window.
- **Integration:** After editing, the "Approve" button sends the *modified* history back to the engine.

### 2.4 Tiered Metrics & Logs
- **Observability:** Show which model (Tier 1, 2, 3, or 4) is currently active.
- **Sub-Agent Logs:** Provide quick links to open the timestamped log files generated by `mma_exec.py`.

## 3. Technical Integration
- **Event Bus:** Use the existing `AsyncEventQueue` to push `StateUpdateEvents` from the `ConductorEngine` to the GUI.
- **Non-Blocking:** Ensure the UI remains responsive (FPS > 60) even when multiple tickets are processing or the engine is waiting for user input.
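The event-bus integration above can be sketched with a plain queue. `StateUpdateEvent` and the status values are named by this spec; the dataclass fields and the pause helper below are illustrative:

```python
import queue
from dataclasses import dataclass

@dataclass
class StateUpdateEvent:
    ticket_id: str
    status: str          # Pending | Running | Paused | Complete | Blocked
    payload: str = ""

event_bus: "queue.Queue[StateUpdateEvent]" = queue.Queue()

def engine_pre_execution_pause(ticket_id: str, proposed_script: str) -> None:
    # In Step Mode the engine pauses before a tool call and notifies the GUI,
    # which renders the proposed script with Approve / Edit Payload / Abort.
    event_bus.put(StateUpdateEvent(ticket_id, "Paused", proposed_script))

engine_pre_execution_pause("T-7", "write_file(...)")
evt = event_bus.get_nowait()   # the GUI drains this each render frame
```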
@@ -12,16 +12,16 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
- `uv` - package/env management

**Files:**
- `gui.py` - main GUI, `App` class, all panels, all callbacks, confirmation dialog, layout persistence, rich comms rendering
- `ai_client.py` - unified provider wrapper, model listing, session management, send, tool/function-call loop, comms log, provider error classification
- `aggregate.py` - reads config, collects files/screenshots/discussion, writes numbered `.md` files to `output_dir`
- `gui_legacy.py` - main GUI, `App` class, all panels, all callbacks, confirmation dialog, layout persistence, rich comms rendering; `[+ Maximize]` buttons in `ConfirmDialog` and `win_script_output` now pass text directly as `user_data` / read from `self._last_script` / `self._last_output` instance vars instead of `dpg.get_value(tag)` — fixes glitch when word-wrap is ON or dialog is dismissed before viewer opens
- `ai_client.py` - unified provider wrapper, model listing, session management, send, tool/function-call loop, comms log, provider error classification, token estimation, and aggressive history truncation
- `aggregate.py` - reads config, collects files/screenshots/discussion, builds `file_items` with `mtime` for cache optimization, writes numbered `.md` files to `output_dir` using `build_markdown_from_items` to avoid double I/O; `run()` returns `(markdown_str, path, file_items)` tuple; `summary_only=False` by default (full file contents sent, not heuristic summaries)
- `shell_runner.py` - subprocess wrapper that runs PowerShell scripts sandboxed to `base_dir`, returns stdout/stderr/exit code as a string
- `session_logger.py` - opens timestamped log files at session start; writes comms entries as JSON-L and tool calls as markdown; saves each AI-generated script as a `.ps1` file
- `project_manager.py` - per-project .toml load/save, entry serialisation (entry_to_str/str_to_entry with @timestamp support), default_project/default_discussion factories, migrate_from_legacy_config, flat_config for aggregate.run(), git helpers (get_git_commit, get_git_log)
- `theme.py` - palette definitions, font loading, scale, load_from_config/save_to_config
- `gemini.py` - legacy standalone Gemini wrapper (not used by the main GUI; superseded by `ai_client.py`)
- `file_cache.py` - stub; Anthropic Files API path removed; kept so stale imports don't break
- `mcp_client.py` - MCP-style read-only file tools (read_file, list_directory, search_files, get_file_summary); allowlist enforced against project file_items + base_dirs; dispatched by ai_client tool-use loop for both Anthropic and Gemini
- `mcp_client.py` - MCP-style tools (read_file, list_directory, search_files, get_file_summary, web_search, fetch_url); allowlist enforced against project file_items + base_dirs for file tools; web tools are unrestricted; dispatched by ai_client tool-use loop for both Anthropic and Gemini
- `summarize.py` - local heuristic summariser (no AI); .py via AST, .toml via regex, .md headings, generic preview; used by mcp_client.get_file_summary and aggregate.build_summary_section
- `config.toml` - global-only settings: [ai] provider+model+system_prompt, [theme] palette+font+scale, [projects] paths array + active path
- `manual_slop.toml` - per-project file: [project] name+git_dir+system_prompt+main_context, [output] namespace+output_dir, [files] base_dir+paths, [screenshots] base_dir+paths, [discussion] roles+active+[discussion.discussions.<name>] git_commit+last_updated+history

@@ -79,7 +79,7 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
- Both Gemini and Anthropic are configured with a `run_powershell` tool/function declaration
- When the AI wants to edit or create files it emits a tool call with a `script` string
- `ai_client` runs a loop (max `MAX_TOOL_ROUNDS = 10`) feeding tool results back until the AI stops calling tools
- Before any script runs, `gui.py` shows a modal `ConfirmDialog` on the main thread; the background send thread blocks on a `threading.Event` until the user clicks Approve or Reject
- Before any script runs, `gui_legacy.py` shows a modal `ConfirmDialog` on the main thread; the background send thread blocks on a `threading.Event` until the user clicks Approve or Reject
- The dialog displays `base_dir`, shows the script in an editable text box (allowing last-second tweaks), and has Approve & Run / Reject buttons
- On approval the (possibly edited) script is passed to `shell_runner.run_powershell()` which prepends `Set-Location -LiteralPath '<base_dir>'` and runs it via `powershell -NoProfile -NonInteractive -Command`
- stdout, stderr, and exit code are returned to the AI as the tool result
@@ -87,7 +87,7 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
- All tool calls (script + result/rejection) are appended to `_tool_log` and displayed in the Tool Calls panel

**Dynamic file context refresh (ai_client.py):**
- After the last tool call in each round, all project files from `file_items` are re-read from disk via `_reread_file_items()`. The `file_items` variable is reassigned so subsequent rounds see fresh content.
- After the last tool call in each round, project files from `file_items` are checked via `_reread_file_items()`. It uses `mtime` to only re-read modified files, returning only the `changed` files to build a minimal `[FILES UPDATED]` block.
- For Anthropic: the refreshed file contents are injected as a `text` block appended to the `tool_results` user message, prefixed with `[FILES UPDATED]` and an instruction not to re-read them.
- For Gemini: refreshed file contents are appended to the last function response's `output` string as a `[SYSTEM: FILES UPDATED]` block. On the next tool round, stale `[FILES UPDATED]` blocks are stripped from history and old tool outputs are truncated to `_history_trunc_limit` characters to control token growth.
- `_build_file_context_text(file_items)` formats the refreshed files as markdown code blocks (same format as the original context)
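The `mtime` check can be sketched as follows — a hedged approximation of `_reread_file_items()`, with the `file_items` dict shape (`path`, `mtime`, `content`) assumed from this doc rather than copied from the source:

```python
import os
import tempfile
from pathlib import Path

def reread_changed(file_items: list[dict]) -> list[dict]:
    # Re-read only items whose on-disk mtime differs from the recorded one;
    # the returned subset is what builds the minimal [FILES UPDATED] block.
    changed = []
    for item in file_items:
        mtime = os.path.getmtime(item["path"])
        if mtime != item["mtime"]:
            item["content"] = Path(item["path"]).read_text(encoding="utf-8")
            item["mtime"] = mtime
            changed.append(item)
    return changed

with tempfile.TemporaryDirectory() as d:
    p = Path(d) / "a.py"
    p.write_text("x = 1", encoding="utf-8")
    items = [{"path": str(p), "mtime": os.path.getmtime(p), "content": "x = 1"}]
    p.write_text("x = 2", encoding="utf-8")
    os.utime(p, (os.path.getmtime(p) + 10,) * 2)   # force a visible mtime bump
    changed = reread_changed(items)
```

Unchanged files cost nothing: no disk read, no tokens.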
@@ -107,10 +107,10 @@ Is a local GUI tool for manually curating and sending context to AI APIs. It agg
- Entry fields: `ts` (HH:MM:SS), `direction` (OUT/IN), `kind`, `provider`, `model`, `payload` (dict)
- Anthropic responses also include `usage` (input_tokens, output_tokens, cache_creation_input_tokens, cache_read_input_tokens) and `stop_reason` in payload
- `get_comms_log()` returns a snapshot; `clear_comms_log()` empties it
- `comms_log_callback` (injected by gui.py) is called from the background thread with each new entry; gui queues entries in `_pending_comms` (lock-protected) and flushes them to the DPG panel each render frame
- `COMMS_CLAMP_CHARS = 300` in gui.py governs the display cutoff for heavy text fields
- `comms_log_callback` (injected by gui_legacy.py) is called from the background thread with each new entry; gui queues entries in `_pending_comms` (lock-protected) and flushes them to the DPG panel each render frame
- `COMMS_CLAMP_CHARS = 300` in gui_legacy.py governs the display cutoff for heavy text fields

**Comms History panel — rich structured rendering (gui.py):**
**Comms History panel — rich structured rendering (gui_legacy.py):**

Rather than showing raw JSON, each comms entry is rendered using a kind-specific renderer function. Unknown kinds fall back to a generic key/value layout.

@@ -141,10 +141,12 @@ Entry layout: index + timestamp + direction + kind + provider/model header row,
- `log_tool_call(script, result, script_path)` writes the script to `scripts/generated/<ts>_<seq:04d>.ps1` and appends a markdown record to the toolcalls log without the script body (just the file path + result); uses a `threading.Lock` for the sequence counter
- `close_session()` flushes and closes both file handles; called just before `dpg.destroy_context()`

**Anthropic prompt caching:**
**Anthropic prompt caching & history management:**
- System prompt + context are combined into one string, chunked into <=120k char blocks, and sent as the `system=` parameter array. Only the LAST chunk gets `cache_control: ephemeral`, so the entire system prefix is cached as one unit.
- Last tool in `_ANTHROPIC_TOOLS` (`run_powershell`) has `cache_control: ephemeral`; this means the tools prefix is cached together with the system prefix after the first request.
- The user message is sent as a plain `[{"type": "text", "text": user_message}]` block with NO cache_control. The context lives in `system=`, not in the first user message.
- `_add_history_cache_breakpoint` places `cache_control:ephemeral` on the last content block of the second-to-last user message, using the 4th cache breakpoint to cache the conversation history prefix.
- `_trim_anthropic_history` uses token estimation (`_CHARS_PER_TOKEN = 3.5`) to keep the prompt under `_ANTHROPIC_MAX_PROMPT_TOKENS = 180_000`. It strips stale file refreshes from old turns, and drops oldest turn pairs if still over budget.
- The tools list is built once per session via `_get_anthropic_tools()` and reused across all API calls within the tool loop, avoiding redundant Python-side reconstruction.
- `_strip_cache_controls()` removes stale `cache_control` markers from all history entries before each API call, ensuring only the stable system/tools prefix consumes cache breakpoint slots.
- Cache stats (creation tokens, read tokens) are surfaced in the comms log usage dict and displayed in the Comms History panel
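The trim strategy reads as: estimate tokens at roughly 3.5 characters each, and drop the oldest user/assistant pair while over budget. A minimal sketch — the constant values mirror this doc, but the message shape and loop details are assumptions, not the real `_trim_anthropic_history`:

```python
_CHARS_PER_TOKEN = 3.5

def estimate_tokens(messages: list[dict]) -> int:
    return int(sum(len(m["content"]) for m in messages) / _CHARS_PER_TOKEN)

def trim_history(messages: list[dict], max_tokens: int) -> list[dict]:
    trimmed = list(messages)
    # Drop the oldest user+assistant pair until the estimate fits the budget.
    while len(trimmed) > 2 and estimate_tokens(trimmed) > max_tokens:
        trimmed = trimmed[2:]
    return trimmed

history = [{"role": r, "content": "x" * 700} for r in ("user", "assistant") * 5]
trimmed = trim_history(history, max_tokens=1000)
```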
@@ -180,26 +182,30 @@ Entry layout: index + timestamp + direction + kind + provider/model header row,
**MCP file tools (mcp_client.py + ai_client.py):**
- Four read-only tools exposed to the AI as native function/tool declarations: `read_file`, `list_directory`, `search_files`, `get_file_summary`
- Access control: `mcp_client.configure(file_items, extra_base_dirs)` is called before each send; builds an allowlist of resolved absolute paths from the project's `file_items` plus the `base_dir`; any path that is not explicitly in the list or not under one of the allowed directories returns `ACCESS DENIED`
- `mcp_client.dispatch(tool_name, tool_input)` is the single dispatch entry point used by both Anthropic and Gemini tool-use loops
- `mcp_client.dispatch(tool_name, tool_input)` is the single dispatch entry point used by both Anthropic and Gemini tool-use loops; `TOOL_NAMES` set now includes all six tool names
- Anthropic: MCP tools appear before `run_powershell` in the tools list (no `cache_control` on them; only `run_powershell` carries `cache_control: ephemeral`)
- Gemini: MCP tools are included in the `FunctionDeclaration` list alongside `run_powershell`
- `get_file_summary` uses `summarize.summarise_file()` — same heuristic used for the initial `<context>` block, so the AI gets the same compact structural view it already knows
- `list_directory` sorts dirs before files; shows name, type, and size
- `search_files` uses `Path.glob()` with the caller-supplied pattern (supports `**/*.py` style)
- `read_file` returns raw UTF-8 text; errors (not found, access denied, decode error) are returned as error strings rather than exceptions, so the AI sees them as tool results
- `web_search(query)` queries DuckDuckGo HTML endpoint and returns the top 5 results (title, URL, snippet) as a formatted string; uses a custom `_DDGParser` (HTMLParser subclass)
- `fetch_url(url)` fetches a URL, strips HTML tags/scripts via `_TextExtractor` (HTMLParser subclass), collapses whitespace, and truncates to 40k chars to prevent context blowup; handles DuckDuckGo redirect links automatically
- `summarize.py` heuristics: `.py` → AST imports + ALL_CAPS constants + classes+methods + top-level functions; `.toml` → table headers + top-level keys; `.md` → h1–h3 headings with indentation; all others → line count + first 8 lines preview
- Comms log: MCP tool calls log `OUT/tool_call` with `{"name": ..., "args": {...}}` and `IN/tool_result` with `{"name": ..., "output": ...}`; rendered in the Comms History panel via `_render_payload_tool_call` (shows each arg key/value) and `_render_payload_tool_result` (shows output)
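The allowlist behaviour described above — an explicit file list plus base-directory containment, with `ACCESS DENIED` strings instead of exceptions — can be approximated like this (function names mirror the doc, but the bodies are illustrative, not the real `mcp_client`):

```python
from pathlib import Path

_allowed_files: set[Path] = set()
_allowed_dirs: list[Path] = []

def configure(file_paths: list[str], base_dirs: list[str]) -> None:
    # Rebuild the allowlist before each send (mirrors mcp_client.configure).
    global _allowed_files, _allowed_dirs
    _allowed_files = {Path(p).resolve() for p in file_paths}
    _allowed_dirs = [Path(d).resolve() for d in base_dirs]

def is_allowed(path: str) -> bool:
    p = Path(path).resolve()
    return p in _allowed_files or any(p.is_relative_to(d) for d in _allowed_dirs)

def read_file(path: str) -> str:
    if not is_allowed(path):
        return f"ACCESS DENIED: {path}"          # error string, not an exception
    try:
        return Path(path).read_text(encoding="utf-8")
    except OSError as exc:
        return f"ERROR: {exc}"

configure(["proj/manual_slop.toml"], ["proj/src"])
```

Returning errors as strings keeps the tool loop alive: the AI sees the denial as a tool result rather than a crash.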
**Known extension points:**
- Add more providers by adding a section to `credentials.toml`, a `_list_*` and `_send_*` function in `ai_client.py`, and the provider name to the `PROVIDERS` list in `gui.py`
- Add more providers by adding a section to `credentials.toml`, a `_list_*` and `_send_*` function in `ai_client.py`, and the provider name to the `PROVIDERS` list in `gui_legacy.py`
- Discussion history excerpts could be individually toggleable for inclusion in the generated md
- `MAX_TOOL_ROUNDS` in `ai_client.py` caps agentic loops at 10 rounds; adjustable
- `COMMS_CLAMP_CHARS` in `gui.py` controls the character threshold for clamping heavy payload fields in the Comms History panel
- `COMMS_CLAMP_CHARS` in `gui_legacy.py` controls the character threshold for clamping heavy payload fields in the Comms History panel
- Additional project metadata (description, tags, created date) could be added to `[project]` in the per-project toml

### Gemini Context Management
- Gemini uses explicit caching via `client.caches.create()` to store the `system_instruction` + tools as an immutable cached prefix with a 1-hour TTL. The cache is created once per chat session.
- Proactively rebuilds cache at 90% of `_GEMINI_CACHE_TTL = 3600` to avoid stale-reference errors.
- When context changes (detected via `md_content` hash), the old cache is deleted, a new cache is created, and chat history is migrated to a fresh chat session pointing at the new cache.
- Trims history by dropping oldest pairs if input tokens exceed `_GEMINI_MAX_INPUT_TOKENS = 900_000`.
- If cache creation fails (e.g., content is under the minimum token threshold — 1024 for Flash, 4096 for Pro), the system falls back to inline `system_instruction` in the chat config. Implicit caching may still provide cost savings in this case.
- The `<context>` block lives inside `system_instruction`, NOT in user messages, preventing history bloat across turns.
- On cleanup/exit, active caches are deleted via `ai_client.cleanup()` to prevent orphaned billing.
@@ -216,7 +222,7 @@ Entry layout: index + timestamp + direction + kind + provider/model header row,

## Recent Changes (Text Viewer Maximization)
- **Global Text Viewer (gui.py)**: Added a dedicated, large popup window (win_text_viewer) to allow reading and scrolling through large, dense text blocks without feeling cramped.
- **Global Text Viewer (gui_legacy.py)**: Added a dedicated, large popup window (win_text_viewer) to allow reading and scrolling through large, dense text blocks without feeling cramped.
- **Comms History**: Every multi-line text field in the comms log now has a [+] button next to its label that opens the text in the Global Text Viewer.
- **Tool Log History**: Added [+ Script] and [+ Output] buttons next to each logged tool call to easily maximize and read the full executed scripts and raw tool outputs.
- **Last Script Output Popup**: Expanded the default size of the popup (now 800x600) and gave the input script panel more vertical space to prevent it from feeling 'scrunched'. Added [+ Maximize] buttons for both the script and the output sections to inspect them in full detail.
@@ -244,3 +250,34 @@ Documentation has been completely rewritten matching the strict, structural form
- `docs/guide_architecture.md`: Details the Python implementation algorithms, queue management for UI rendering, the specific AST heuristics used for context aggregation, and the distinct algorithms for trimming Anthropic history vs Gemini state caching.
- `docs/Readme.md`: The core interface manual.
- `docs/guide_tools.md`: Security architecture for `_is_allowed` paths and definitions of the read-only vs destructive tool pipeline.

## Updates (2026-02-22 — ai_client.py & aggregate.py)

### mcp_client.py — Web Tools Added
- `web_search(query)` and `fetch_url(url)` added as two new MCP tools alongside the existing four file tools.
- `TOOL_NAMES` set updated to include all six tool names for dispatch routing.
- `MCP_TOOL_SPECS` list extended with full JSON schema definitions for both web tools.
- Both tools are declared in `_build_anthropic_tools()` and `_gemini_tool_declaration()` so they are available to both providers.
- Web tools bypass the `_is_allowed` path check (no filesystem access); file tools retain the allowlist enforcement.

### aggregate.py — run() double-I/O elimination
- `run()` now calls `build_file_items()` once, then passes the result to `build_markdown_from_items()` instead of calling `build_files_section()` separately. This avoids reading every file twice per send.
- `build_markdown_from_items()` accepts a `summary_only` flag (default `False`); when `False` it inlines full file content; when `True` it delegates to `summarize.build_summary_markdown()` for compact structural summaries.
- `run()` returns a 3-tuple `(markdown_str, output_path, file_items)` — the `file_items` list is passed through to `gui_legacy.py` as `self.last_file_items` for dynamic context refresh after tool calls.
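The single-read flow can be sketched as: build `file_items` once, then render the markdown from the in-memory items. The dict shape (`path`, `mtime`, `content`) follows this doc; the function bodies are illustrative, not the real `aggregate.py`:

```python
import os
import tempfile
from pathlib import Path

def build_file_items(paths: list[str]) -> list[dict]:
    # Read each file exactly once, recording mtime for later refresh checks.
    items = []
    for p in paths:
        items.append({
            "path": p,
            "mtime": os.path.getmtime(p),
            "content": Path(p).read_text(encoding="utf-8"),
        })
    return items

def build_markdown_from_items(items: list[dict], summary_only: bool = False) -> str:
    # Render from memory; no second pass over the disk.
    parts = []
    for it in items:
        body = it["content"] if not summary_only else it["content"].splitlines()[0]
        parts.append(f"### `{it['path']}`\n\n{body}")
    return "\n\n".join(parts)

with tempfile.TemporaryDirectory() as d:
    p = Path(d) / "note.md"
    p.write_text("hello world", encoding="utf-8")
    items = build_file_items([str(p)])
    md = build_markdown_from_items(items)   # no second read of the file
```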
## Updates (2026-02-22 — gui_legacy.py [+ Maximize] bug fix)

### Problem
Three `[+ Maximize]` buttons were reading their text content via `dpg.get_value(tag)` at click time:
1. `ConfirmDialog.show()` — passed `f"{self._tag}_script"` as `user_data` and called `dpg.get_value(u)` in the lambda. If the dialog was dismissed before the viewer opened, the item no longer existed and the call would fail silently or crash.
2. `win_script_output` Script `[+ Maximize]` — used `user_data="last_script_text"` and `dpg.get_value(u)`. When word-wrap is ON, `last_script_text` is hidden (`show=False`); in some DPG versions `dpg.get_value` on a hidden `input_text` returns `""`.
3. `win_script_output` Output `[+ Maximize]` — same issue with `"last_script_output"`.

### Fix
- `ConfirmDialog.show()`: changed `user_data` to `self._script` (the actual text string captured at button-creation time) and the callback to `lambda s, a, u: _show_text_viewer("Confirm Script", u)`. The text is now baked in at dialog construction, not read from a potentially-deleted widget.
- `App._append_tool_log()`: added `self._last_script = script` and `self._last_output = result` assignments so the latest values are always available as instance state.
- `win_script_output` buttons: both `[+ Maximize]` buttons now use `lambda s, a, u: _show_text_viewer("...", self._last_script/output)` directly, bypassing DPG widget state entirely.
Readme.md (11 lines changed)
@@ -21,6 +21,15 @@ Features:
* Popup text viewers for large script/output inspection.
* Color theming and UI scaling.

## Session-Based Logging and Management

Manual Slop organizes all communications and tool interactions into session-based directories under `logs/`. This ensures a clean history and easy debugging.

* **Organized Storage:** Each session is assigned a unique ID and its own sub-directory containing communication logs (`comms.log`) and metadata.
* **Log Management Panel:** The GUI includes a dedicated 'Log Management' panel where you can view session history, inspect metadata (message counts, errors, size), and protect important sessions.
* **Automated Pruning:** To keep the workspace clean, the application automatically prunes insignificant logs. Sessions older than 24 hours that are not "whitelisted" and are smaller than 2KB are automatically deleted.
* **Whitelisting:** Sessions containing errors, high activity, or significant changes are automatically whitelisted. Users can also manually whitelist sessions via the GUI to prevent them from being pruned.
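The pruning rule above combines three conditions; as a predicate (thresholds taken from this doc, the signature itself illustrative):

```python
def should_prune(age_hours: float, size_bytes: int, whitelisted: bool) -> bool:
    # All three conditions must hold before a session log is deleted:
    # older than 24 hours, smaller than 2 KB, and not whitelisted.
    return age_hours > 24 and size_bytes < 2048 and not whitelisted
```

Whitelisting wins unconditionally: a protected session is never deleted regardless of age or size.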
## Documentation

* [docs/Readme.md](docs/Readme.md) for the interface and usage guide

@@ -41,5 +50,5 @@ api_key = "****"
2. Have fun. This is experimental slop.

```ps1
uv run .\gui.py
uv run .\gui_2.py
```
aggregate.py (402 lines changed)
@@ -1,4 +1,5 @@
# aggregate.py
|
||||
from __future__ import annotations
|
||||
"""
|
||||
Note(Gemini):
|
||||
This module orchestrates the construction of the final Markdown context string.
|
||||
@@ -15,81 +16,94 @@ import tomllib
|
||||
import re
|
||||
import glob
|
||||
from pathlib import Path, PureWindowsPath
|
||||
from typing import Any
|
||||
import summarize
|
||||
import project_manager
|
||||
from file_cache import ASTParser
|
||||
|
||||
def find_next_increment(output_dir: Path, namespace: str) -> int:
|
||||
pattern = re.compile(rf"^{re.escape(namespace)}_(\d+)\.md$")
|
||||
max_num = 0
|
||||
for f in output_dir.iterdir():
|
||||
if f.is_file():
|
||||
match = pattern.match(f.name)
|
||||
if match:
|
||||
max_num = max(max_num, int(match.group(1)))
|
||||
return max_num + 1
|
||||
pattern = re.compile(rf"^{re.escape(namespace)}_(\d+)\.md$")
|
||||
max_num = 0
|
||||
for f in output_dir.iterdir():
|
||||
if f.is_file():
|
||||
match = pattern.match(f.name)
|
||||
if match:
|
||||
max_num = max(max_num, int(match.group(1)))
|
||||
return max_num + 1
|
||||
|
||||
def is_absolute_with_drive(entry: str) -> bool:
    try:
        p = PureWindowsPath(entry)
        return p.drive != ""
    except Exception:
        return False

def resolve_paths(base_dir: Path, entry: str) -> list[Path]:
    has_drive = is_absolute_with_drive(entry)
    is_wildcard = "*" in entry
    if is_wildcard:
        root = Path(entry) if has_drive else base_dir / entry
        matches = [Path(p) for p in glob.glob(str(root), recursive=True) if Path(p).is_file()]
        return sorted(matches)
    else:
        if has_drive:
            return [Path(entry)]
        return [(base_dir / entry).resolve()]

def resolve_paths(base_dir: Path, entry: str) -> list[Path]:
    has_drive = is_absolute_with_drive(entry)
    is_wildcard = "*" in entry
    matches = []
    if is_wildcard:
        root = Path(entry) if has_drive else base_dir / entry
        matches = [Path(p) for p in glob.glob(str(root), recursive=True) if Path(p).is_file()]
    else:
        p = Path(entry) if has_drive else (base_dir / entry).resolve()
        matches = [p]
    # Blacklist filter
    filtered = []
    for p in matches:
        name = p.name.lower()
        if name == "history.toml" or name.endswith("_history.toml"):
            continue
        filtered.append(p)
    return sorted(filtered)

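The new blacklist step added to `resolve_paths` can be isolated and tested without touching the filesystem. A sketch of just that filter, with made-up file names:

```python
# Illustrative sketch of the history-file blacklist used by resolve_paths,
# reduced to the filtering step so it runs standalone.
from pathlib import Path

def filter_history(paths: list[Path]) -> list[Path]:
    # Same rule as resolve_paths: drop history.toml and *_history.toml,
    # matched case-insensitively against the file name.
    filtered = []
    for p in paths:
        name = p.name.lower()
        if name == "history.toml" or name.endswith("_history.toml"):
            continue
        filtered.append(p)
    return sorted(filtered)

candidates = [Path("src/main.py"), Path("history.toml"), Path("proj_History.TOML")]
kept = filter_history(candidates)
print(kept)  # only src/main.py survives
```

Lower-casing the name before comparing is what makes `proj_History.TOML` match the blacklist on Windows-style case-insensitive layouts.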
def build_discussion_section(history: list[str]) -> str:
    sections = []
    for i, paste in enumerate(history, start=1):
        sections.append(f"### Discussion Excerpt {i}\n\n{paste.strip()}")
    return "\n\n---\n\n".join(sections)

def build_files_section(base_dir: Path, files: list[str]) -> str:
    sections = []
    for entry in files:
        paths = resolve_paths(base_dir, entry)
        if not paths:
            sections.append(f"### `{entry}`\n\n```text\nERROR: no files matched: {entry}\n```")
            continue
        for path in paths:
            suffix = path.suffix.lstrip(".")
            lang = suffix if suffix else "text"
            try:
                content = path.read_text(encoding="utf-8")
            except FileNotFoundError:
                content = f"ERROR: file not found: {path}"
            except Exception as e:
                content = f"ERROR: {e}"
            original = entry if "*" not in entry else str(path)
            sections.append(f"### `{original}`\n\n```{lang}\n{content}\n```")
    return "\n\n---\n\n".join(sections)

def build_files_section(base_dir: Path, files: list[str | dict[str, Any]]) -> str:
    sections = []
    for entry_raw in files:
        if isinstance(entry_raw, dict):
            entry = entry_raw.get("path")
        else:
            entry = entry_raw
        paths = resolve_paths(base_dir, entry)
        if not paths:
            sections.append(f"### `{entry}`\n\n```text\nERROR: no files matched: {entry}\n```")
            continue
        for path in paths:
            suffix = path.suffix.lstrip(".")
            lang = suffix if suffix else "text"
            try:
                content = path.read_text(encoding="utf-8")
            except FileNotFoundError:
                content = f"ERROR: file not found: {path}"
            except Exception as e:
                content = f"ERROR: {e}"
            original = entry if "*" not in entry else str(path)
            sections.append(f"### `{original}`\n\n```{lang}\n{content}\n```")
    return "\n\n---\n\n".join(sections)

def build_screenshots_section(base_dir: Path, screenshots: list[str]) -> str:
    sections = []
    for entry in screenshots:
        paths = resolve_paths(base_dir, entry)
        if not paths:
            sections.append(f"### `{entry}`\n\n_ERROR: no files matched: {entry}_")
            continue
        for path in paths:
            original = entry if "*" not in entry else str(path)
            if not path.exists():
                sections.append(f"### `{original}`\n\n_ERROR: file not found: {path}_")
                continue
            sections.append(f"### `{original}`\n\n![{original}]({path})")
    return "\n\n---\n\n".join(sections)

def build_file_items(base_dir: Path, files: list[str]) -> list[dict]:
    """
def build_file_items(base_dir: Path, files: list[str | dict[str, Any]]) -> list[dict[str, Any]]:
    """
    Return a list of dicts describing each file, for use by ai_client when it
    wants to upload individual files rather than inline everything as markdown.

@@ -98,78 +112,216 @@ def build_file_items(base_dir: Path, files: list[str]) -> list[dict]:
    entry : str (original config entry string)
    content : str (file text, or error string)
    error : bool
    mtime : float (last modification time, for skip-if-unchanged optimization)
    tier : int | None (optional tier for context management)
    """
    items = []
    for entry in files:
        paths = resolve_paths(base_dir, entry)
        if not paths:
            items.append({"path": None, "entry": entry, "content": f"ERROR: no files matched: {entry}", "error": True})
            continue
        for path in paths:
            try:
                content = path.read_text(encoding="utf-8")
                error = False
            except FileNotFoundError:
                content = f"ERROR: file not found: {path}"
                error = True
            except Exception as e:
                content = f"ERROR: {e}"
                error = True
            items.append({"path": path, "entry": entry, "content": content, "error": error})
    return items
    items = []
    for entry_raw in files:
        if isinstance(entry_raw, dict):
            entry = entry_raw.get("path")
            tier = entry_raw.get("tier")
        else:
            entry = entry_raw
            tier = None
        paths = resolve_paths(base_dir, entry)
        if not paths:
            items.append({"path": None, "entry": entry, "content": f"ERROR: no files matched: {entry}", "error": True, "mtime": 0.0, "tier": tier})
            continue
        for path in paths:
            try:
                content = path.read_text(encoding="utf-8")
                mtime = path.stat().st_mtime
                error = False
            except FileNotFoundError:
                content = f"ERROR: file not found: {path}"
                mtime = 0.0
                error = True
            except Exception as e:
                content = f"ERROR: {e}"
                mtime = 0.0
                error = True
            items.append({"path": path, "entry": entry, "content": content, "error": error, "mtime": mtime, "tier": tier})
    return items

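For reference, the shape of a single item dict in the new version can be sketched directly. This re-implements the assembly inline (so it runs standalone) and uses an invented file name:

```python
# Minimal sketch of the item dict produced per file by the new
# build_file_items signature (mtime and tier are the added fields).
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "example.py"
    path.write_text("print('hi')\n", encoding="utf-8")
    item = {
        "path": path,
        "entry": "example.py",          # original config entry string
        "content": path.read_text(encoding="utf-8"),
        "error": False,
        "mtime": path.stat().st_mtime,  # used for skip-if-unchanged
        "tier": None,                   # optional context-management tier
    }

print(sorted(item))  # ['content', 'entry', 'error', 'mtime', 'path', 'tier']
```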
def build_summary_section(base_dir: Path, files: list[str]) -> str:
    """
def build_summary_section(base_dir: Path, files: list[str | dict[str, Any]]) -> str:
    """
    Build a compact summary section using summarize.py, one short block per file.
    Used as the initial <context> block instead of full file contents.
    """
    items = build_file_items(base_dir, files)
    return summarize.build_summary_markdown(items)

def build_static_markdown(base_dir: Path, files: list[str], screenshot_base_dir: Path, screenshots: list[str], summary_only: bool = False) -> str:
    parts = []
    if files:
        if summary_only:
            parts.append("## Files (Summary)\n\n" + build_summary_section(base_dir, files))
        else:
            parts.append("## Files\n\n" + build_files_section(base_dir, files))
    if screenshots:
        parts.append("## Screenshots\n\n" + build_screenshots_section(screenshot_base_dir, screenshots))
    return "\n\n---\n\n".join(parts) if parts else ""

def _build_files_section_from_items(file_items: list[dict[str, Any]]) -> str:
    """Build the files markdown section from pre-read file items (avoids double I/O)."""
    sections = []
    for item in file_items:
        path = item.get("path")
        entry = item.get("entry", "unknown")
        content = item.get("content", "")
        if path is None:
            sections.append(f"### `{entry}`\n\n```text\n{content}\n```")
            continue
        suffix = path.suffix.lstrip(".") if hasattr(path, "suffix") else "text"
        lang = suffix if suffix else "text"
        original = entry if "*" not in entry else str(path)
        sections.append(f"### `{original}`\n\n```{lang}\n{content}\n```")
    return "\n\n---\n\n".join(sections)

def build_dynamic_markdown(history: list[str]) -> str:
    return "## Discussion History\n\n" + build_discussion_section(history) if history else ""

def build_markdown_from_items(file_items: list[dict[str, Any]], screenshot_base_dir: Path, screenshots: list[str], history: list[str], summary_only: bool = False) -> str:
    """Build markdown from pre-read file items instead of re-reading from disk."""
    parts = []
    # STATIC PREFIX: Files and Screenshots must go first to maximize Cache Hits
    if file_items:
        if summary_only:
            parts.append("## Files (Summary)\n\n" + summarize.build_summary_markdown(file_items))
        else:
            parts.append("## Files\n\n" + _build_files_section_from_items(file_items))
    if screenshots:
        parts.append("## Screenshots\n\n" + build_screenshots_section(screenshot_base_dir, screenshots))
    # DYNAMIC SUFFIX: History changes every turn, must go last
    if history:
        parts.append("## Discussion History\n\n" + build_discussion_section(history))
    return "\n\n---\n\n".join(parts)

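The "static prefix, dynamic suffix" comments describe the invariant that makes provider-side prompt caching work: the files section stays byte-identical across turns while history only appends. A toy sketch of that property (section names illustrative, not the exact strings used above):

```python
# Sketch of the static-prefix / dynamic-suffix layout: the files section
# comes first, so each turn's prompt starts with the same bytes as the
# previous turn's, which is what prefix-based prompt caching keys on.
def assemble(files_md: str, history_md: str) -> str:
    parts = [f"## Files\n\n{files_md}"]
    if history_md:
        parts.append(f"## Discussion History\n\n{history_md}")
    return "\n\n---\n\n".join(parts)

turn1 = assemble("aggregate.py ...", "")
turn2 = assemble("aggregate.py ...", "user: hello")

# turn2 begins with turn1 verbatim, so a prefix cache still hits.
print(turn2.startswith(turn1))  # True
```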
def run(config: dict) -> tuple[str, str, Path, list[dict]]:
    namespace = config.get("project", {}).get("name")
    if not namespace:
        namespace = config.get("output", {}).get("namespace", "project")
    output_dir = Path(config["output"]["output_dir"])
    base_dir = Path(config["files"]["base_dir"])
    files = config["files"].get("paths", [])
    screenshot_base_dir = Path(config.get("screenshots", {}).get("base_dir", "."))
    screenshots = config.get("screenshots", {}).get("paths", [])
    history = config.get("discussion", {}).get("history", [])
    output_dir.mkdir(parents=True, exist_ok=True)
    increment = find_next_increment(output_dir, namespace)
    output_file = output_dir / f"{namespace}_{increment:03d}.md"

    static_md = build_static_markdown(base_dir, files, screenshot_base_dir, screenshots, summary_only=False)
    dynamic_md = build_dynamic_markdown(history)

    markdown = f"{static_md}\n\n---\n\n{dynamic_md}" if static_md and dynamic_md else static_md or dynamic_md
    output_file.write_text(markdown, encoding="utf-8")

    file_items = build_file_items(base_dir, files)
    return static_md, dynamic_md, output_file, file_items

def build_markdown_no_history(file_items: list[dict[str, Any]], screenshot_base_dir: Path, screenshots: list[str], summary_only: bool = False) -> str:
    """Build markdown with only files + screenshots (no history). Used for stable caching."""
    return build_markdown_from_items(file_items, screenshot_base_dir, screenshots, history=[], summary_only=summary_only)

def build_discussion_text(history: list[str]) -> str:
    """Build just the discussion history section text. Returns empty string if no history."""
    if not history:
        return ""
    return "## Discussion History\n\n" + build_discussion_section(history)

def main():
    with open("config.toml", "rb") as f:
        config = tomllib.load(f)
    static_md, dynamic_md, output_file, _ = run(config)
    print(f"Written: {output_file}")

def build_tier1_context(file_items: list[dict[str, Any]], screenshot_base_dir: Path, screenshots: list[str], history: list[str]) -> str:
    """
    Tier 1 Context: Strategic/Orchestration.
    Full content for core conductor files and files with tier=1, summaries for others.
    """
    core_files = {"product.md", "tech-stack.md", "workflow.md", "tracks.md"}
    parts = []
    # Files section
    if file_items:
        sections = []
        for item in file_items:
            path = item.get("path")
            name = path.name if path else ""
            if name in core_files or item.get("tier") == 1:
                # Include in full
                sections.append("### `" + (item.get("entry") or str(path)) + "`\n\n" +
                                f"```{path.suffix.lstrip('.') if path.suffix else 'text'}\n{item.get('content', '')}\n```")
            else:
                # Summarize
                sections.append("### `" + (item.get("entry") or str(path)) + "`\n\n" +
                                summarize.summarise_file(path, item.get("content", "")))
        parts.append("## Files (Tier 1 - Mixed)\n\n" + "\n\n---\n\n".join(sections))
    if screenshots:
        parts.append("## Screenshots\n\n" + build_screenshots_section(screenshot_base_dir, screenshots))
    if history:
        parts.append("## Discussion History\n\n" + build_discussion_section(history))
    return "\n\n---\n\n".join(parts)

def build_tier2_context(file_items: list[dict[str, Any]], screenshot_base_dir: Path, screenshots: list[str], history: list[str]) -> str:
    """
    Tier 2 Context: Architectural/Tech Lead.
    Full content for all files (standard behavior).
    """
    return build_markdown_from_items(file_items, screenshot_base_dir, screenshots, history, summary_only=False)

def build_tier3_context(file_items: list[dict[str, Any]], screenshot_base_dir: Path, screenshots: list[str], history: list[str], focus_files: list[str]) -> str:
    """
    Tier 3 Context: Execution/Worker.
    Full content for focus_files and files with tier=3, summaries/skeletons for others.
    """
    parts = []
    if file_items:
        sections = []
        for item in file_items:
            path = item.get("path")
            entry = item.get("entry", "")
            path_str = str(path) if path else ""
            # Check if this file is in focus_files (by name or path)
            is_focus = False
            for focus in focus_files:
                if focus == entry or (path and focus == path.name) or focus in path_str:
                    is_focus = True
                    break
            if is_focus or item.get("tier") == 3:
                sections.append("### `" + (entry or path_str) + "`\n\n" +
                                f"```{path.suffix.lstrip('.') if path and path.suffix else 'text'}\n{item.get('content', '')}\n```")
            else:
                content = item.get("content", "")
                if path and path.suffix == ".py" and not item.get("error"):
                    try:
                        parser = ASTParser("python")
                        skeleton = parser.get_skeleton(content)
                        sections.append(f"### `{entry or path_str}` (AST Skeleton)\n\n```python\n{skeleton}\n```")
                    except Exception:
                        # Fallback to summary if AST parsing fails
                        sections.append(f"### `{entry or path_str}`\n\n" + summarize.summarise_file(path, content))
                else:
                    sections.append(f"### `{entry or path_str}`\n\n" + summarize.summarise_file(path, content))
        parts.append("## Files (Tier 3 - Focused)\n\n" + "\n\n---\n\n".join(sections))
    if screenshots:
        parts.append("## Screenshots\n\n" + build_screenshots_section(screenshot_base_dir, screenshots))
    if history:
        parts.append("## Discussion History\n\n" + build_discussion_section(history))
    return "\n\n---\n\n".join(parts)

def build_markdown(base_dir: Path, files: list[str | dict[str, Any]], screenshot_base_dir: Path, screenshots: list[str], history: list[str], summary_only: bool = False) -> str:
    parts = []
    # STATIC PREFIX: Files and Screenshots must go first to maximize Cache Hits
    if files:
        if summary_only:
            parts.append("## Files (Summary)\n\n" + build_summary_section(base_dir, files))
        else:
            parts.append("## Files\n\n" + build_files_section(base_dir, files))
    if screenshots:
        parts.append("## Screenshots\n\n" + build_screenshots_section(screenshot_base_dir, screenshots))
    # DYNAMIC SUFFIX: History changes every turn, must go last
    if history:
        parts.append("## Discussion History\n\n" + build_discussion_section(history))
    return "\n\n---\n\n".join(parts)

def run(config: dict[str, Any]) -> tuple[str, Path, list[dict[str, Any]]]:
    namespace = config.get("project", {}).get("name")
    if not namespace:
        namespace = config.get("output", {}).get("namespace", "project")
    output_dir = Path(config["output"]["output_dir"])
    base_dir = Path(config["files"]["base_dir"])
    files = config["files"].get("paths", [])
    screenshot_base_dir = Path(config.get("screenshots", {}).get("base_dir", "."))
    screenshots = config.get("screenshots", {}).get("paths", [])
    history = config.get("discussion", {}).get("history", [])
    output_dir.mkdir(parents=True, exist_ok=True)
    increment = find_next_increment(output_dir, namespace)
    output_file = output_dir / f"{namespace}_{increment:03d}.md"
    # Build file items once, then construct markdown from them (avoids double I/O)
    file_items = build_file_items(base_dir, files)
    summary_only = config.get("project", {}).get("summary_only", False)
    markdown = build_markdown_from_items(file_items, screenshot_base_dir, screenshots, history,
                                         summary_only=summary_only)
    output_file.write_text(markdown, encoding="utf-8")
    return markdown, output_file, file_items

def main() -> None:
    # Load global config to find active project
    config_path = Path("config.toml")
    if not config_path.exists():
        print("config.toml not found.")
        return
    with open(config_path, "rb") as f:
        global_cfg = tomllib.load(f)
    active_path = global_cfg.get("projects", {}).get("active")
    if not active_path:
        print("No active project found in config.toml.")
        return
    # Use project_manager to load project (handles history segregation)
    proj = project_manager.load_project(active_path)
    # Use flat_config to make it compatible with aggregate.run()
    config = project_manager.flat_config(proj)
    markdown, output_file, _ = run(config)
    print(f"Written: {output_file}")


if __name__ == "__main__":
    main()

2349
ai_client.py
File diff suppressed because it is too large
245
api_hook_client.py
Normal file
@@ -0,0 +1,245 @@
from __future__ import annotations
import requests
import json
import time
from typing import Any


class ApiHookClient:
    def __init__(self, base_url: str = "http://127.0.0.1:8999", max_retries: int = 5, retry_delay: float = 0.2) -> None:
        self.base_url = base_url
        self.max_retries = max_retries
        self.retry_delay = retry_delay

    def wait_for_server(self, timeout: float = 3) -> bool:
        """
        Polls the /status endpoint until the server is ready or timeout is reached.
        """
        start_time = time.time()
        while time.time() - start_time < timeout:
            try:
                if self.get_status().get('status') == 'ok':
                    return True
            except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
                time.sleep(0.1)
        return False

    def _make_request(self, method: str, endpoint: str, data: dict | None = None, timeout: float | None = None) -> dict | None:
        url = f"{self.base_url}{endpoint}"
        headers = {'Content-Type': 'application/json'}
        last_exception = None
        # Increase default request timeout for local server
        req_timeout = timeout if timeout is not None else 2.0
        for attempt in range(self.max_retries + 1):
            try:
                if method == 'GET':
                    response = requests.get(url, timeout=req_timeout)
                elif method == 'POST':
                    response = requests.post(url, json=data, headers=headers, timeout=req_timeout)
                else:
                    raise ValueError(f"Unsupported HTTP method: {method}")
                response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
                return response.json()
            except (requests.exceptions.Timeout, requests.exceptions.ConnectionError) as e:
                last_exception = e
                if attempt < self.max_retries:
                    time.sleep(self.retry_delay)
                    continue
                if isinstance(e, requests.exceptions.Timeout):
                    raise requests.exceptions.Timeout(f"Request to {endpoint} timed out after {self.max_retries} retries.") from e
                raise requests.exceptions.ConnectionError(f"Could not connect to API hook server at {self.base_url} after {self.max_retries} retries.") from e
            except requests.exceptions.HTTPError as e:
                raise requests.exceptions.HTTPError(f"HTTP error {e.response.status_code} for {endpoint}: {e.response.text}") from e
            except json.JSONDecodeError as e:
                raise ValueError(f"Failed to decode JSON from response for {endpoint}: {response.text}") from e
        if last_exception:
            raise last_exception

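The retry loop in `_make_request` can be distilled to a network-free sketch. Here the HTTP call is replaced by a stubbed flaky function (`flaky` and `call_with_retries` are invented for the demo, not part of the client):

```python
# Distilled version of the _make_request retry loop: retry on connection
# errors up to max_retries times, then re-raise with context.
import time

def call_with_retries(fn, max_retries: int = 5, retry_delay: float = 0.0):
    last_exc = None
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except ConnectionError as e:
            last_exc = e
            if attempt < max_retries:
                time.sleep(retry_delay)
                continue
            raise ConnectionError(f"gave up after {max_retries} retries") from e
    if last_exc:
        raise last_exc

attempts = {"n": 0}

def flaky():
    # Fails twice, then succeeds, like a server that is still starting up.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("not up yet")
    return {"status": "ok"}

result = call_with_retries(flaky)
print(result, attempts["n"])  # {'status': 'ok'} 3
```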
    def get_status(self) -> dict:
        """Checks the health of the hook server."""
        url = f"{self.base_url}/status"
        try:
            response = requests.get(url, timeout=5.0)
            response.raise_for_status()
            return response.json()
        except Exception:
            raise requests.exceptions.ConnectionError(f"Could not reach /status at {self.base_url}")

    def get_project(self) -> dict | None:
        return self._make_request('GET', '/api/project')

    def post_project(self, project_data: dict) -> dict | None:
        return self._make_request('POST', '/api/project', data={'project': project_data})

    def get_session(self) -> dict | None:
        return self._make_request('GET', '/api/session')

    def get_mma_status(self) -> dict | None:
        """Retrieves current MMA status (track, tickets, tier, etc.)."""
        return self._make_request('GET', '/api/gui/mma_status')

    def push_event(self, event_type: str, payload: dict) -> dict | None:
        """Pushes an event to the GUI's AsyncEventQueue via the /api/gui endpoint."""
        return self.post_gui({
            "action": event_type,
            "payload": payload
        })

    def get_performance(self) -> dict | None:
        """Retrieves UI performance metrics."""
        return self._make_request('GET', '/api/performance')

    def post_session(self, session_entries: list) -> dict | None:
        return self._make_request('POST', '/api/session', data={'session': {'entries': session_entries}})

    def post_gui(self, gui_data: dict) -> dict | None:
        return self._make_request('POST', '/api/gui', data=gui_data)

    def select_tab(self, tab_bar: str, tab: str) -> dict | None:
        """Tells the GUI to switch to a specific tab in a tab bar."""
        return self.post_gui({
            "action": "select_tab",
            "tab_bar": tab_bar,
            "tab": tab
        })

    def select_list_item(self, listbox: str, item_value: str) -> dict | None:
        """Tells the GUI to select an item in a listbox by its value."""
        return self.post_gui({
            "action": "select_list_item",
            "listbox": listbox,
            "item_value": item_value
        })

    def set_value(self, item: str, value: Any) -> dict | None:
        """Sets the value of a GUI item."""
        return self.post_gui({
            "action": "set_value",
            "item": item,
            "value": value
        })

    def get_value(self, item: str) -> Any:
        """Gets the value of a GUI item via its mapped field."""
        try:
            # First try direct field querying via POST
            res = self._make_request('POST', '/api/gui/value', data={"field": item})
            if res and "value" in res:
                v = res.get("value")
                if v is not None:
                    return v
        except Exception:
            pass
        try:
            # Try GET fallback
            res = self._make_request('GET', f'/api/gui/value/{item}')
            if res and "value" in res:
                v = res.get("value")
                if v is not None:
                    return v
        except Exception:
            pass
        try:
            # Fallback for thinking/live/prior which are in diagnostics
            diag = self._make_request('GET', '/api/gui/diagnostics')
            if item in diag:
                return diag[item]
            # Map common indicator tags to diagnostics keys
            mapping = {
                "thinking_indicator": "thinking",
                "operations_live_indicator": "live",
                "prior_session_indicator": "prior"
            }
            key = mapping.get(item)
            if key and key in diag:
                return diag[key]
        except Exception:
            pass
        return None

    def get_text_value(self, item_tag: str) -> str | None:
        """Wraps get_value and returns its string representation, or None."""
        val = self.get_value(item_tag)
        return str(val) if val is not None else None

    def get_node_status(self, node_tag: str) -> Any:
        """Wraps get_value for a DAG node or queries the diagnostic endpoint for its status."""
        val = self.get_value(node_tag)
        if val is not None:
            return val
        try:
            diag = self._make_request('GET', '/api/gui/diagnostics')
            if 'nodes' in diag and node_tag in diag['nodes']:
                return diag['nodes'][node_tag]
            if node_tag in diag:
                return diag[node_tag]
        except Exception:
            pass
        return None

    def click(self, item: str, *args: Any, **kwargs: Any) -> dict | None:
        """Simulates a click on a GUI button or item."""
        user_data = kwargs.pop('user_data', None)
        return self.post_gui({
            "action": "click",
            "item": item,
            "args": args,
            "kwargs": kwargs,
            "user_data": user_data
        })

    def get_indicator_state(self, tag: str) -> dict:
        """Checks if an indicator is shown using the diagnostics endpoint."""
        # Mapping tag to the keys used in diagnostics endpoint
        mapping = {
            "thinking_indicator": "thinking",
            "operations_live_indicator": "live",
            "prior_session_indicator": "prior"
        }
        key = mapping.get(tag, tag)
        try:
            diag = self._make_request('GET', '/api/gui/diagnostics')
            return {"tag": tag, "shown": diag.get(key, False)}
        except Exception as e:
            return {"tag": tag, "shown": False, "error": str(e)}

    def get_events(self) -> list:
        """Fetches and clears the event queue from the server."""
        try:
            return self._make_request('GET', '/api/events').get("events", [])
        except Exception:
            return []

    def wait_for_event(self, event_type: str, timeout: float = 5) -> dict | None:
        """Polls for a specific event type."""
        start = time.time()
        while time.time() - start < timeout:
            events = self.get_events()
            for ev in events:
                if ev.get("type") == event_type:
                    return ev
            time.sleep(0.1)  # Fast poll
        return None

    def wait_for_value(self, item: str, expected: Any, timeout: float = 5) -> bool:
        """Polls until get_value(item) == expected."""
        start = time.time()
        while time.time() - start < timeout:
            if self.get_value(item) == expected:
                return True
            time.sleep(0.1)  # Fast poll
        return False

    def reset_session(self) -> dict | None:
        """Simulates clicking the 'Reset Session' button in the GUI."""
        return self.click("btn_reset")

    def request_confirmation(self, tool_name: str, args: dict) -> Any:
        """Asks the user for confirmation via the GUI (blocking call)."""
        # Using a long timeout as this waits for human input (60 seconds)
        res = self._make_request('POST', '/api/ask',
                                 data={'type': 'tool_approval', 'tool': tool_name, 'args': args},
                                 timeout=60.0)
        return res.get('response') if res else None

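The polling helpers (`wait_for_event`, `wait_for_value`) share one pattern: poll a reader until it matches or the deadline passes. A condensed, server-free sketch, with `get_value` swapped for a local closure so it can actually run (names here are invented for the demo):

```python
# Condensed form of the wait_for_value polling loop, testable without
# the GUI hook server.
import time

def wait_for(read_value, expected, timeout: float = 1.0, interval: float = 0.01) -> bool:
    start = time.time()
    while time.time() - start < timeout:
        if read_value() == expected:
            return True
        time.sleep(interval)
    return False

state = {"v": 0}

def reader():
    state["v"] += 1  # simulate the GUI value changing between polls
    return state["v"]

ok = wait_for(reader, 3)
print(ok)  # True
```

A fixed 0.1 s interval, as used in the client, trades a little latency for low load on the local server.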
331
api_hooks.py
Normal file
@@ -0,0 +1,331 @@
from __future__ import annotations
import json
import threading
import uuid
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler
from typing import Any
import logging
import session_logger


class HookServerInstance(ThreadingHTTPServer):
    """Custom HTTPServer that carries a reference to the main App instance."""
    def __init__(self, server_address: tuple[str, int], RequestHandlerClass: type, app: Any) -> None:
        super().__init__(server_address, RequestHandlerClass)
        self.app = app

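The pattern here is that the server subclass carries an `app` reference which handlers reach through `self.server.app`. A minimal, self-contained sketch (class names, endpoint, and payload are illustrative, not the real app):

```python
# Minimal sketch of the HookServerInstance pattern: subclass the server to
# carry an app object, and read it from handlers via self.server.app.
import json
import threading
import urllib.request
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

class DemoServer(ThreadingHTTPServer):
    def __init__(self, addr, handler, app):
        super().__init__(addr, handler)
        self.app = app  # shared state reachable from every handler

class DemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/status':
            body = json.dumps({'status': 'ok', 'app': self.server.app['name']}).encode()
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

srv = DemoServer(('127.0.0.1', 0), DemoHandler, app={'name': 'gui'})  # port 0 = ephemeral
threading.Thread(target=srv.serve_forever, daemon=True).start()
port = srv.server_address[1]
with urllib.request.urlopen(f'http://127.0.0.1:{port}/status') as r:
    payload = json.loads(r.read())
srv.shutdown()
print(payload)  # {'status': 'ok', 'app': 'gui'}
```

`ThreadingHTTPServer` handles each request on its own thread, which is why the real handlers below marshal work back to the GUI thread instead of touching app state directly.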
class HookHandler(BaseHTTPRequestHandler):
    """Handles incoming HTTP requests for the API hooks."""
    def do_GET(self) -> None:
        app = self.server.app
        session_logger.log_api_hook("GET", self.path, "")
        if self.path == '/status':
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            self.wfile.write(json.dumps({'status': 'ok'}).encode('utf-8'))
        elif self.path == '/api/project':
            import project_manager
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            flat = project_manager.flat_config(app.project)
            self.wfile.write(json.dumps({'project': flat}).encode('utf-8'))
        elif self.path == '/api/session':
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            self.wfile.write(
                json.dumps({'session': {'entries': app.disc_entries}}).encode('utf-8'))
        elif self.path == '/api/performance':
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            metrics = {}
            if hasattr(app, 'perf_monitor'):
                metrics = app.perf_monitor.get_metrics()
            self.wfile.write(json.dumps({'performance': metrics}).encode('utf-8'))
        elif self.path == '/api/events':
            # Long-poll or return current event queue
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            events = []
            if hasattr(app, '_api_event_queue'):
                with app._api_event_queue_lock:
                    events = list(app._api_event_queue)
                    app._api_event_queue.clear()
            self.wfile.write(json.dumps({'events': events}).encode('utf-8'))
elif self.path == '/api/gui/value':
|
||||
# POST with {"field": "field_tag"} to get value
|
||||
content_length = int(self.headers.get('Content-Length', 0))
|
||||
body = self.rfile.read(content_length)
|
||||
data = json.loads(body.decode('utf-8'))
|
||||
field_tag = data.get("field")
|
||||
print(f"[DEBUG] Hook Server: get_value for {field_tag}")
|
||||
event = threading.Event()
|
||||
result = {"value": None}
|
||||
|
||||
def get_val():
|
||||
try:
|
||||
if field_tag in app._settable_fields:
|
||||
attr = app._settable_fields[field_tag]
|
||||
val = getattr(app, attr, None)
|
||||
print(f"[DEBUG] Hook Server: attr={attr}, val={val}")
|
||||
result["value"] = val
|
||||
else:
|
||||
print(f"[DEBUG] Hook Server: {field_tag} NOT in settable_fields")
|
||||
finally:
|
||||
event.set()
|
||||
with app._pending_gui_tasks_lock:
|
||||
app._pending_gui_tasks.append({
|
||||
"action": "custom_callback",
|
||||
"callback": get_val
|
||||
})
|
||||
if event.wait(timeout=60):
|
||||
self.send_response(200)
|
||||
self.send_header('Content-Type', 'application/json')
|
||||
self.end_headers()
|
||||
self.wfile.write(json.dumps(result).encode('utf-8'))
|
||||
else:
|
||||
self.send_response(504)
|
||||
self.end_headers()
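The handler above marshals the read onto the GUI thread: the HTTP thread queues a callback and blocks on a `threading.Event`, the main loop later runs the callback (filling in the result and releasing the waiter), and a timeout maps to HTTP 504. A minimal sketch of the same handoff with a simulated main loop (all names here are illustrative, not the application's real attributes):

```python
import threading
import time

pending_tasks = []
pending_tasks_lock = threading.Lock()

def request_value(read_fn, timeout=2.0):
    # Runs on the HTTP thread: queue a callback, then block until it fires.
    event = threading.Event()
    result = {"value": None}

    def callback():
        try:
            result["value"] = read_fn()
        finally:
            event.set()  # release the waiter even if read_fn raised

    with pending_tasks_lock:
        pending_tasks.append(callback)
    if not event.wait(timeout):
        return None  # the real handler answers with HTTP 504 here
    return result["value"]

def run_main_loop_until(pred, tick=0.01):
    # Simulates GUI frames: each frame drains and runs queued callbacks.
    deadline = time.time() + 2.0
    while not pred() and time.time() < deadline:
        with pending_tasks_lock:
            tasks = list(pending_tasks)
            pending_tasks.clear()
        for task in tasks:
            task()
        time.sleep(tick)

state = {"user_prompt": "hello"}
results = []
worker = threading.Thread(
    target=lambda: results.append(request_value(lambda: state["user_prompt"])))
worker.start()
run_main_loop_until(lambda: results)
worker.join()
```

The key property is that `read_fn` only ever executes on the loop thread, so GUI state is never touched concurrently from the HTTP thread.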
        elif self.path.startswith('/api/gui/value/'):
            # Generic endpoint to get the value of any settable field
            field_tag = self.path.split('/')[-1]
            event = threading.Event()
            result = {"value": None}

            def get_val():
                try:
                    if field_tag in app._settable_fields:
                        attr = app._settable_fields[field_tag]
                        result["value"] = getattr(app, attr, None)
                finally:
                    event.set()

            with app._pending_gui_tasks_lock:
                app._pending_gui_tasks.append({
                    "action": "custom_callback",
                    "callback": get_val
                })
            if event.wait(timeout=60):
                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(json.dumps(result).encode('utf-8'))
            else:
                self.send_response(504)
                self.end_headers()
        elif self.path == '/api/gui/mma_status':
            event = threading.Event()
            result = {}

            def get_mma():
                try:
                    result["mma_status"] = getattr(app, "mma_status", "idle")
                    result["ai_status"] = getattr(app, "ai_status", "idle")
                    result["active_tier"] = getattr(app, "active_tier", None)
                    at = getattr(app, "active_track", None)
                    result["active_track"] = at.id if hasattr(at, "id") else at
                    result["active_tickets"] = getattr(app, "active_tickets", [])
                    result["mma_step_mode"] = getattr(app, "mma_step_mode", False)
                    result["pending_tool_approval"] = getattr(app, "_pending_ask_dialog", False)
                    result["pending_mma_step_approval"] = getattr(app, "_pending_mma_approval", None) is not None
                    result["pending_mma_spawn_approval"] = getattr(app, "_pending_mma_spawn", None) is not None
                    # Keep the old aggregate fields for backward compatibility
                    result["pending_approval"] = result["pending_mma_step_approval"] or result["pending_tool_approval"]
                    result["pending_spawn"] = result["pending_mma_spawn_approval"]
                    result["tracks"] = getattr(app, "tracks", [])
                    result["proposed_tracks"] = getattr(app, "proposed_tracks", [])
                    result["mma_streams"] = getattr(app, "mma_streams", {})
                finally:
                    event.set()

            with app._pending_gui_tasks_lock:
                app._pending_gui_tasks.append({
                    "action": "custom_callback",
                    "callback": get_mma
                })
            if event.wait(timeout=60):
                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(json.dumps(result).encode('utf-8'))
            else:
                self.send_response(504)
                self.end_headers()
        elif self.path == '/api/gui/diagnostics':
            # Safe way to query multiple states at once via the main-thread queue
            event = threading.Event()
            result = {}

            def check_all():
                try:
                    # Generic state check based on App attributes (works for both the DPG and ImGui versions)
                    status = getattr(app, "ai_status", "idle")
                    result["thinking"] = status in ["sending...", "running powershell..."]
                    result["live"] = status in ["running powershell...", "fetching url...", "searching web...", "powershell done, awaiting AI..."]
                    result["prior"] = getattr(app, "is_viewing_prior_session", False)
                finally:
                    event.set()

            with app._pending_gui_tasks_lock:
                app._pending_gui_tasks.append({
                    "action": "custom_callback",
                    "callback": check_all
                })
            if event.wait(timeout=60):
                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(json.dumps(result).encode('utf-8'))
            else:
                self.send_response(504)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(json.dumps({'error': 'timeout'}).encode('utf-8'))
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self) -> None:
        app = self.server.app
        content_length = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(content_length)
        body_str = body.decode('utf-8') if body else ""
        session_logger.log_api_hook("POST", self.path, body_str)
        try:
            data = json.loads(body_str) if body_str else {}
            if self.path == '/api/project':
                app.project = data.get('project', app.project)
                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(
                    json.dumps({'status': 'updated'}).encode('utf-8'))
            elif self.path == '/api/session':
                app.disc_entries = data.get('session', {}).get(
                    'entries', app.disc_entries)
                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(
                    json.dumps({'status': 'updated'}).encode('utf-8'))
            elif self.path == '/api/gui':
                with app._pending_gui_tasks_lock:
                    app._pending_gui_tasks.append(data)
                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(
                    json.dumps({'status': 'queued'}).encode('utf-8'))
            elif self.path == '/api/ask':
                request_id = str(uuid.uuid4())
                event = threading.Event()
                if not hasattr(app, '_pending_asks'):
                    app._pending_asks = {}
                if not hasattr(app, '_ask_responses'):
                    app._ask_responses = {}
                app._pending_asks[request_id] = event
                # Emit an event for test/client discovery
                with app._api_event_queue_lock:
                    app._api_event_queue.append({
                        "type": "ask_received",
                        "request_id": request_id,
                        "data": data
                    })
                with app._pending_gui_tasks_lock:
                    app._pending_gui_tasks.append({
                        "type": "ask",
                        "request_id": request_id,
                        "data": data
                    })
                if event.wait(timeout=60.0):
                    response_data = app._ask_responses.get(request_id)
                    # Clean up the response after reading it
                    if request_id in app._ask_responses:
                        del app._ask_responses[request_id]
                    self.send_response(200)
                    self.send_header('Content-Type', 'application/json')
                    self.end_headers()
                    self.wfile.write(json.dumps({'status': 'ok', 'response': response_data}).encode('utf-8'))
                else:
                    if request_id in app._pending_asks:
                        del app._pending_asks[request_id]
                    self.send_response(504)
                    self.send_header('Content-Type', 'application/json')
                    self.end_headers()
                    self.wfile.write(json.dumps({'error': 'timeout'}).encode('utf-8'))
            elif self.path == '/api/ask/respond':
                request_id = data.get('request_id')
                response_data = data.get('response')
                if request_id and hasattr(app, '_pending_asks') and request_id in app._pending_asks:
                    app._ask_responses[request_id] = response_data
                    event = app._pending_asks[request_id]
                    event.set()
                    # Clean up the pending ask entry
                    del app._pending_asks[request_id]
                    # Queue a GUI task to clear the dialog
                    with app._pending_gui_tasks_lock:
                        app._pending_gui_tasks.append({
                            "action": "clear_ask",
                            "request_id": request_id
                        })
                    self.send_response(200)
                    self.send_header('Content-Type', 'application/json')
                    self.end_headers()
                    self.wfile.write(json.dumps({'status': 'ok'}).encode('utf-8'))
                else:
                    self.send_response(404)
                    self.end_headers()
            else:
                self.send_response(404)
                self.end_headers()
        except Exception as e:
            self.send_response(500)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            self.wfile.write(json.dumps({'error': str(e)}).encode('utf-8'))

    def log_message(self, format: str, *args: Any) -> None:
        logging.info("Hook API: " + format % args)


class HookServer:
    def __init__(self, app: Any, port: int = 8999) -> None:
        self.app = app
        self.port = port
        self.server = None
        self.thread = None

    def start(self) -> None:
        if self.thread and self.thread.is_alive():
            return
        is_gemini_cli = getattr(self.app, 'current_provider', '') == 'gemini_cli'
        if not getattr(self.app, 'test_hooks_enabled', False) and not is_gemini_cli:
            return
        # Ensure the app has the task queue and lock initialized
        if not hasattr(self.app, '_pending_gui_tasks'):
            self.app._pending_gui_tasks = []
        if not hasattr(self.app, '_pending_gui_tasks_lock'):
            self.app._pending_gui_tasks_lock = threading.Lock()
        # Initialize the ask-related dictionaries
        if not hasattr(self.app, '_pending_asks'):
            self.app._pending_asks = {}
        if not hasattr(self.app, '_ask_responses'):
            self.app._ask_responses = {}
        # Event queue for test-script subscriptions
        if not hasattr(self.app, '_api_event_queue'):
            self.app._api_event_queue = []
        if not hasattr(self.app, '_api_event_queue_lock'):
            self.app._api_event_queue_lock = threading.Lock()
        self.server = HookServerInstance(('127.0.0.1', self.port), HookHandler, self.app)
        self.thread = threading.Thread(target=self.server.serve_forever, daemon=True)
        self.thread.start()
        logging.info(f"Hook server started on port {self.port}")

    def stop(self) -> None:
        if self.server:
            self.server.shutdown()
            self.server.server_close()
        if self.thread:
            self.thread.join()
        logging.info("Hook server stopped")
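`HookServer` follows the standard stdlib pattern of running an HTTP server's `serve_forever` loop on a daemon thread and tearing it down with `shutdown()` plus `server_close()`. A self-contained sketch of that lifecycle, using a trivial handler and an OS-assigned port instead of the real `HookHandler` on 8999:

```python
import http.server
import json
import threading
import urllib.request

class _Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET with a small JSON body, like the hook endpoints do.
        body = json.dumps({"status": "ok"}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for a free port; the real server pins to self.port.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), _Handler)
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

url = f"http://127.0.0.1:{server.server_address[1]}/api/ping"
with urllib.request.urlopen(url, timeout=5) as resp:
    payload = json.loads(resp.read().decode("utf-8"))

# Same teardown order as HookServer.stop().
server.shutdown()
server.server_close()
thread.join()
```

The daemon flag ensures a crashed GUI process never hangs on the server thread, while the explicit `shutdown()`/`join()` path gives a clean stop when the app exits normally.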
@@ -0,0 +1,5 @@
# Track api_hooks_verification_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -0,0 +1,8 @@
{
  "track_id": "api_hooks_verification_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T17:46:51Z",
  "updated_at": "2026-02-23T17:46:51Z",
  "description": "Update the conductor to properly utilize the new API hooks for automated testing and verification of track implementation features without the need for user intervention."
}
19 conductor/archive/api_hooks_verification_20260223/plan.md Normal file
@@ -0,0 +1,19 @@
# Implementation Plan: Integrate API Hooks for Automated Track Verification

## Phase 1: Update Workflow Definition [checkpoint: f17c9e3]
- [x] Task: Modify `conductor/workflow.md` to reflect the new automated verification process. [2ec1ecf]
  - [ ] Sub-task: Update the "Phase Completion Verification and Checkpointing Protocol" section to replace manual verification steps with a description of the automated API hook process.
  - [ ] Sub-task: Ensure the updated workflow clearly states that the agent will announce the automated test, execute it, and then present the results (success or failure) to the user.
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Update Workflow Definition' (Protocol in workflow.md)

## Phase 2: Implement Automated Verification Logic [checkpoint: b575dcd]
- [x] Task: Develop the client-side logic for communicating with the API hook server. [f4a9ff8]
  - [ ] Sub-task: Write failing unit tests for a new `ApiHookClient` that can send requests to the IPC server.
  - [ ] Sub-task: Implement the `ApiHookClient` to make the tests pass.
- [x] Task: Integrate the `ApiHookClient` into the Conductor agent's workflow. [c7c8b89]
  - [ ] Sub-task: Write failing integration tests to ensure the Conductor's phase completion logic calls the `ApiHookClient`.
  - [ ] Sub-task: Modify the workflow implementation to use the `ApiHookClient` for verification.
- [x] Task: Implement result handling and user feedback. [94b4f38]
  - [ ] Sub-task: Write failing tests for handling success, failure, and server-unavailable scenarios.
  - [ ] Sub-task: Implement the logic to log results, present them to the user, and halt the workflow on failure.
- [ ] Task: Conductor - User Manual Verification 'Phase 2: Implement Automated Verification Logic' (Protocol in workflow.md)
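The plan names an `ApiHookClient` but does not show its shape. A minimal sketch of what such a client's request construction might look like, tested without any network access (only the class name comes from the plan; the methods, host, and port default are assumptions based on the hook server above):

```python
import json

class ApiHookClient:
    """Builds requests for the hook server's JSON-over-HTTP API (sketch)."""

    def __init__(self, host="127.0.0.1", port=8999):
        self.base = f"http://{host}:{port}"

    def build_get(self, endpoint):
        # GET endpoints such as /api/gui/mma_status take no body.
        return self.base + endpoint

    def build_post(self, endpoint, payload):
        # POST endpoints expect a JSON body and Content-Type header.
        body = json.dumps(payload).encode("utf-8")
        headers = {"Content-Type": "application/json"}
        return self.base + endpoint, body, headers

client = ApiHookClient()
url = client.build_get("/api/gui/mma_status")
post_url, body, headers = client.build_post(
    "/api/ask/respond", {"request_id": "abc", "response": "approve"})
```

Keeping request construction separate from transport makes the failure and server-unavailable scenarios in the plan straightforward to unit-test.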
21 conductor/archive/api_hooks_verification_20260223/spec.md Normal file
@@ -0,0 +1,21 @@
# Specification: Integrate API Hooks for Automated Track Verification

## Overview
This track focuses on integrating the existing, previously implemented API hooks (from track `test_hooks_20260223`) into the Conductor workflow. The primary goal is to automate the verification steps within the "Phase Completion Verification and Checkpointing Protocol", reducing the need for manual user intervention and enabling a more streamlined, automated development process.

## Functional Requirements
- **Workflow Integration:** The `workflow.md` document, specifically the "Phase Completion Verification and Checkpointing Protocol," must be updated to replace manual verification steps with automated checks using the API hooks.
- **IPC Communication:** The updated workflow will communicate with the application's backend via the established IPC server to trigger verification tasks.
- **Result Handling:**
  - All results from the API hook verifications must be logged for auditing and debugging purposes.
  - Upon successful verification, the Conductor agent will proceed with the workflow as it currently does after a successful manual check.
  - Upon failure, the agent will halt, present the failure logs to the user, and await further instructions.
- **User Interaction Model:** The system will transition from asking the user to perform a manual test to informing the user that an automated test is running, and then presenting the results.

## Non-Functional Requirements
- **Resilience:** The Conductor agent must handle cases where the API hook server is unavailable or a hook call fails unexpectedly, without crashing or entering an unrecoverable state.
- **Transparency:** All interactions with the API hooks must be clearly logged, making the automated process easy to monitor and debug.

## Out of Scope
- **Modifying API Hooks:** This track will not alter the existing API hooks, the IPC server, or the backend implementation. The focus is solely on the client-side integration within the Conductor agent's workflow.
- **Changes to Manual Overrides:** Users will retain the ability to manually intervene or bypass automated checks if necessary.
5 conductor/archive/api_metrics_20260223/index.md Normal file
@@ -0,0 +1,5 @@
# Track api_metrics_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
8 conductor/archive/api_metrics_20260223/metadata.json Normal file
@@ -0,0 +1,8 @@
{
  "track_id": "api_metrics_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T10:00:00Z",
  "updated_at": "2026-02-23T10:00:00Z",
  "description": "Review vendor API usage with regard to conservative context handling"
}
19 conductor/archive/api_metrics_20260223/plan.md Normal file
@@ -0,0 +1,19 @@
# Implementation Plan

## Phase 1: Metric Extraction and Logic Review [checkpoint: 2668f88]
- [x] Task: Extract explicit cache counts and lifecycle states from Gemini SDK
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Review and expose 'history bleed' (token limit proximity) flags
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 1: Metric Extraction and Logic Review' (Protocol in workflow.md)

## Phase 2: GUI Telemetry and Plotting [checkpoint: 76582c8]
- [x] Task: Implement token budget visualizer (e.g., progress bars for limits) in Dear PyGui
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Implement active caches data display in Provider/Comms panel
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 2: GUI Telemetry and Plotting' (Protocol in workflow.md)
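At its core, the token budget visualizer in Phase 2 reduces to mapping used/limit into a clamped 0..1 fraction for a progress-bar widget, with a threshold flag for the "history bleed" indicator. A hedged sketch of that arithmetic (function names and the 0.9 threshold are assumptions, not the project's actual implementation):

```python
def token_budget_fraction(used_tokens, limit_tokens):
    """Clamp used/limit into [0.0, 1.0] for a progress-bar widget."""
    if limit_tokens <= 0:
        return 0.0  # avoid division by zero for unknown limits
    return min(max(used_tokens / limit_tokens, 0.0), 1.0)

def history_bleed(used_tokens, limit_tokens, threshold=0.9):
    """Flag when usage is close enough to the cap that pruning will kick in."""
    return token_budget_fraction(used_tokens, limit_tokens) >= threshold

frac = token_budget_fraction(45_000, 200_000)   # 0.225
bleed = history_bleed(185_000, 200_000)          # True: 92.5% of budget
```

Computing the fraction outside the render loop keeps the GUI thread free of per-frame arithmetic, in line with the non-blocking requirement in the spec below.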
22 conductor/archive/api_metrics_20260223/spec.md Normal file
@@ -0,0 +1,22 @@
# Specification: Review vendor API usage with regard to conservative context handling

## Overview
This track aims to optimize token efficiency and transparency by reviewing and improving how vendor APIs (Gemini and Anthropic) handle conservative context pruning. The primary focus is on extracting, plotting, and exposing deep metrics to the GUI so developers can intuit how close they are to API limits (e.g., token caps, cache counts, history bleed).

## Scope
- **Gemini Hooks:** Review explicit context caching, cache invalidation, and tools declaration.
- **Global Orchestration:** Review global context boundaries within the main prompt lifecycle.
- **GUI Metrics:** Expose as much metric data as possible to the user interface (e.g., plotting token usage, visual indicators for when "history bleed" occurs, displaying the number of active caches).

## Functional Requirements
- Implement extensive token and cache metric extraction from both Gemini and Anthropic API responses.
- Expose these metrics to the Dear PyGui frontend, potentially utilizing visual plots or progress bars to indicate token budget consumption.
- Implement tests to explicitly verify context rules, ensuring history pruning acts conservatively and predictably without data loss.

## Non-Functional Requirements
- Ensure GUI rendering of new plots or dense metrics does not block the main thread.
- Adhere to the "Strict State Management" product guideline.

## Out of Scope
- Major feature additions unrelated to context token management or telemetry.
- Expanding the AI's agentic capabilities (e.g., new tools).
5 conductor/archive/api_vendor_alignment_20260223/index.md Normal file
@@ -0,0 +1,5 @@
# Track api_vendor_alignment_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -0,0 +1,8 @@
{
  "track_id": "api_vendor_alignment_20260223",
  "type": "chore",
  "status": "new",
  "created_at": "2026-02-23T12:00:00Z",
  "updated_at": "2026-02-23T12:00:00Z",
  "description": "Review the project codebase and related documentation, and make sure agentic vendor APIs are being used properly as stated by official documentation from Google for Gemini and Anthropic for Claude."
}
56 conductor/archive/api_vendor_alignment_20260223/plan.md Normal file
@@ -0,0 +1,56 @@
# Implementation Plan: API Usage Audit and Alignment

## Phase 1: Research and Comprehensive Audit [checkpoint: 5ec4283]
Identify all points of interaction with AI SDKs and compare them with the latest official documentation.

- [x] Task: List and categorize all AI SDK usage in the project.
  - [x] Search for all imports of `google.genai` and `anthropic`.
  - [x] Document specific functions and methods being called.
- [x] Task: Research the latest official documentation for the `google-genai` and `anthropic` Python SDKs.
  - [x] Verify latest patterns for Client initialization.
  - [x] Verify latest patterns for Context/Prompt caching.
  - [x] Verify latest patterns for Tool/Function calling.
- [x] Task: Conductor - User Manual Verification 'Phase 1: Research and Comprehensive Audit' (Protocol in workflow.md)

## Phase 2: Gemini (google-genai) Alignment [checkpoint: 842bfc4]
Align the Gemini integration with documented best practices.

- [x] Task: Refactor Gemini Client and Chat initialization if needed.
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Optimize Gemini Context Caching.
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Align Gemini Tool Declaration and handling.
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 2: Gemini (google-genai) Alignment' (Protocol in workflow.md)

## Phase 3: Anthropic Alignment [checkpoint: f0eb538]
Align the Anthropic integration with documented best practices.

- [x] Task: Refactor Anthropic Client and Message creation if needed.
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Optimize Anthropic Prompt Caching (`cache_control`).
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Align Anthropic Tool Declaration and handling.
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 3: Anthropic Alignment' (Protocol in workflow.md)

## Phase 4: History and Token Management [checkpoint: 0f9f235]
Ensure accurate token estimation and robust history handling.

- [x] Task: Review and align token estimation logic for both providers.
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Audit message history truncation and context window management.
  - [x] Write Tests
  - [x] Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 4: History and Token Management' (Protocol in workflow.md)

## Phase 5: Final Validation and Cleanup [checkpoint: e9126b4]
- [x] Task: Perform a full test run using `run_tests.py` to ensure a 100% pass rate.
- [x] Task: Conductor - User Manual Verification 'Phase 5: Final Validation and Cleanup' (Protocol in workflow.md)
29 conductor/archive/api_vendor_alignment_20260223/spec.md Normal file
@@ -0,0 +1,29 @@
# Specification: API Usage Audit and Alignment

## Overview
This track involves a comprehensive audit of the "Manual Slop" codebase to ensure that the integration with the Google Gemini (`google-genai`) and Anthropic Claude (`anthropic`) SDKs aligns with their latest official documentation and best practices. The goal is to identify discrepancies, performance bottlenecks, or deprecated patterns and implement the necessary fixes.

## Scope
- **Target:** Full codebase audit, with primary focus on `ai_client.py`, `mcp_client.py`, and any other modules interacting with AI SDKs.
- **Key Areas:**
  - **Caching Mechanisms:** Verify the Gemini context caching and Anthropic prompt caching implementations.
  - **Tool Calling:** Audit function declarations, parameter schemas, and result handling.
  - **History & Tokens:** Review message history management, token estimation accuracy, and context window handling.

## Functional Requirements
1. **SDK Audit:** Compare existing code patterns against the latest official Python SDK documentation for Gemini and Anthropic.
2. **Feature Validation:**
   - Ensure `google-genai` usage follows the latest `Client` and `types` patterns.
   - Ensure `anthropic` usage utilizes `cache_control` correctly for optimal performance.
3. **Discrepancy Remediation:** Implement code changes to align the implementation with documented standards.
4. **Validation:** Execute tests to ensure that API interactions remain functional and improved.

## Acceptance Criteria
- Full audit completed for all AI SDK interactions.
- Identified discrepancies are documented and fixed.
- Caching, tool calling, and history management logic are verified against the latest SDK standards.
- All existing and new tests pass successfully.

## Out of Scope
- Adding support for new AI providers not already in the project.
- Major UI refactoring unless directly required by API changes.
5 conductor/archive/context_management_20260223/index.md Normal file
@@ -0,0 +1,5 @@
# Track context_management_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -0,0 +1,8 @@
{
  "track_id": "context_management_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T10:00:00Z",
  "updated_at": "2026-02-23T10:00:00Z",
  "description": "Implement context visualization and memory management improvements"
}
19 conductor/archive/context_management_20260223/plan.md Normal file
@@ -0,0 +1,19 @@
# Implementation Plan

## Phase 1: Context Memory and Token Visualization [checkpoint: a88311b]
- [x] Task: Implement token usage summary widget [e34ff7e]
  - [ ] Sub-task: Write Tests
  - [ ] Sub-task: Implement Feature
- [x] Task: Expose history truncation controls in the Discussion panel [94fe904]
  - [ ] Sub-task: Write Tests
  - [ ] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 1: Context Memory and Token Visualization' (Protocol in workflow.md) [a88311b]

## Phase 2: Agent Capability Configuration [checkpoint: 1ac6eb9]
- [x] Task: Add UI toggles for available tools per-project [1677d25]
  - [x] Sub-task: Write Tests
  - [x] Sub-task: Implement Feature
- [x] Task: Wire tool toggles to AI provider tool declaration payload [92aa33c]
  - [ ] Sub-task: Write Tests
  - [ ] Sub-task: Implement Feature
- [x] Task: Conductor - User Manual Verification 'Phase 2: Agent Capability Configuration' (Protocol in workflow.md) [1ac6eb9]
9 conductor/archive/context_management_20260223/spec.md Normal file
@@ -0,0 +1,9 @@
# Specification: Context Visualization and Memory Management

## Overview
This track implements UI improvements and structural changes to Manual Slop to provide explicit visualization of context memory usage and token consumption, fulfilling the "Expert systems level utility" and "Full control" product goals.

## Core Objectives
1. **Token Visualization:** Expose token usage metrics in real time within the GUI (e.g., in a dedicated metrics panel or an augmented Comms panel).
2. **Context Memory Management:** Provide tools to manually flush, persist, or truncate history to manage token budgets per discussion.
3. **Agent Capability Toggles:** Expose explicit configuration options for agent capabilities (e.g., toggle MCP tools on/off) from the UI.
5 conductor/archive/deepseek_support_20260225/index.md Normal file
@@ -0,0 +1,5 @@
# Track deepseek_support_20260225 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
@@ -0,0 +1,8 @@
{
  "track_id": "deepseek_support_20260225",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-25T00:00:00Z",
  "updated_at": "2026-02-25T00:00:00Z",
  "description": "Add support for the DeepSeek API as a provider."
}
27 conductor/archive/deepseek_support_20260225/plan.md Normal file
@@ -0,0 +1,27 @@
# Implementation Plan: DeepSeek API Provider Support

## Phase 1: Infrastructure & Common Logic [checkpoint: 0ec3720]
- [x] Task: Initialize MMA Environment `activate_skill mma-orchestrator` [1b3ff23]
- [x] Task: Update `credentials.toml` schema and configuration logic in `project_manager.py` to support `deepseek` [1b3ff23]
- [x] Task: Define the `DeepSeekProvider` interface in `ai_client.py` and align with existing provider patterns [1b3ff23]
- [x] Task: Conductor - User Manual Verification 'Infrastructure & Common Logic' (Protocol in workflow.md) [1b3ff23]

## Phase 2: DeepSeek API Client Implementation
- [x] Task: Write failing tests for `DeepSeekProvider` model selection and basic completion
- [x] Task: Implement `DeepSeekProvider` using the dedicated SDK
- [x] Task: Write failing tests for streaming and tool calling parity in `DeepSeekProvider`
- [x] Task: Implement streaming and tool calling logic for DeepSeek models
- [x] Task: Conductor - User Manual Verification 'DeepSeek API Client Implementation' (Protocol in workflow.md)

## Phase 3: Reasoning Traces & Advanced Capabilities
- [x] Task: Write failing tests for reasoning trace capture in `DeepSeekProvider` (DeepSeek-R1)
- [x] Task: Implement reasoning trace processing and integration with discussion history
- [x] Task: Write failing tests for token estimation and cost tracking for DeepSeek models
- [x] Task: Implement token usage tracking according to DeepSeek pricing
- [x] Task: Conductor - User Manual Verification 'Reasoning Traces & Advanced Capabilities' (Protocol in workflow.md)

## Phase 4: GUI Integration & Final Verification
- [x] Task: Update `gui_2.py` and `theme_2.py` (if necessary) to include DeepSeek in the provider selection UI
- [x] Task: Implement automated regression tests for the full DeepSeek lifecycle (prompt, streaming, tool call, reasoning)
- [x] Task: Verify overall performance and UI responsiveness with the new provider
- [x] Task: Conductor - User Manual Verification 'GUI Integration & Final Verification' (Protocol in workflow.md)
|
||||
31
conductor/archive/deepseek_support_20260225/spec.md
Normal file
31
conductor/archive/deepseek_support_20260225/spec.md
Normal file
@@ -0,0 +1,31 @@
# Specification: DeepSeek API Provider Support

## Overview

Implement a new AI provider module to support the DeepSeek API within the Manual Slop application. This integration will leverage a dedicated SDK to provide access to high-performance models (DeepSeek-V3 and DeepSeek-R1) with support for streaming, tool calling, and detailed reasoning traces.

## Functional Requirements

- **Dedicated SDK Integration:** Utilize a DeepSeek-specific Python client for API interactions.
- **Model Support:** Initial support for `deepseek-v3` (general performance) and `deepseek-r1` (reasoning).
- **Core Features:**
  - **Streaming:** Support real-time response generation for a better user experience.
  - **Tool Calling:** Integrate with Manual Slop's existing tool/function execution framework.
  - **Reasoning Traces:** Capture and display reasoning paths if provided by the model (e.g., DeepSeek-R1).
- **Configuration Management:**
  - Add a `[deepseek]` section to `credentials.toml` for `api_key`.
  - Update `config.toml` to allow selecting DeepSeek as the active provider.
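As a sketch, the configuration described above might look like the following. Only the `[deepseek]` section and `api_key` are specified by this track; the `config.toml` table and key names are illustrative assumptions.

```toml
# credentials.toml — holds the secret only
[deepseek]
api_key = "sk-..."          # placeholder; never commit real keys

# config.toml — provider selection (table/key names are assumptions)
[ai]
provider = "deepseek"       # alongside "gemini" and "anthropic"
model = "deepseek-r1"       # or "deepseek-v3"
```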
## Non-Functional Requirements

- **Parity:** Maintain consistency with existing Gemini and Anthropic provider implementations in `ai_client.py`.
- **Error Handling:** Robust handling of API rate limits and connection issues specific to DeepSeek.
- **Observability:** Track token usage and costs according to DeepSeek's pricing model.

## Acceptance Criteria

- [ ] User can select "DeepSeek" as a provider in the GUI.
- [ ] Successful completion of prompts using both DeepSeek-V3 and DeepSeek-R1 models.
- [ ] Tool calling works correctly for standard operations (e.g., `read_file`).
- [ ] Reasoning traces from R1 are captured and visible in the discussion history.
- [ ] Streaming responses function correctly without blocking the GUI.

## Out of Scope

- Support for OpenAI-compatible proxies for DeepSeek in this initial track.
- Automated fine-tuning or custom model endpoints.
5 conductor/archive/event_driven_metrics_20260223/index.md (Normal file)
@@ -0,0 +1,5 @@
# Track event_driven_metrics_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)

@@ -0,0 +1,8 @@
{
  "track_id": "event_driven_metrics_20260223",
  "type": "refactor",
  "status": "new",
  "created_at": "2026-02-23T15:46:00Z",
  "updated_at": "2026-02-23T15:46:00Z",
  "description": "Fix client API metrics to use event-driven updates; they shouldn't be driven by UI main-thread graphical updates, only by significant client API calls or responses."
}
28 conductor/archive/event_driven_metrics_20260223/plan.md (Normal file)
@@ -0,0 +1,28 @@
# Implementation Plan: Event-Driven API Metrics Updates

## Phase 1: Event Infrastructure & Test Setup [checkpoint: 776f4e4]

Define the event mechanism and create baseline tests to ensure we don't break data accuracy.

- [x] Task: Create `tests/test_api_events.py` to verify the new event emission logic in isolation. cd3f3c8
- [x] Task: Implement a simple `EventEmitter` or `Signal` class (if not already present) to handle decoupled communication. cd3f3c8
- [x] Task: Instrument `ai_client.py` with the event system, adding placeholders for the key lifecycle events. cd3f3c8
- [ ] Task: Conductor - User Manual Verification 'Phase 1: Event Infrastructure & Test Setup' (Protocol in workflow.md)

## Phase 2: Client Instrumentation (API Lifecycle) [checkpoint: e24664c]

Update the AI client to emit events during actual API interactions.

- [x] Task: Implement event emission for Gemini and Anthropic request/response cycles in `ai_client.py`. 20ebab5
- [x] Task: Implement event emission for tool/function calls and stream processing. 20ebab5
- [x] Task: Verify via tests that events carry the correct payload (token counts, session metadata). 20ebab5
- [x] Task: Conductor - User Manual Verification 'Phase 2: Client Instrumentation (API Lifecycle)' (Protocol in workflow.md) e24664c

## Phase 3: GUI Integration & Decoupling [checkpoint: 8caebbd]

Connect the UI to the event system and remove polling logic.

- [x] Task: Update `gui.py` to subscribe to API events and trigger metrics UI refreshes only upon event receipt. 2dd6145
- [x] Task: Audit the `gui.py` render loop and remove all per-frame metrics calculations or display updates. 2dd6145
- [x] Task: Verify that UI performance improves (reduced CPU/frame time) while metrics remain accurate. 2dd6145
- [x] Task: Conductor - User Manual Verification 'Phase 3: GUI Integration & Decoupling' (Protocol in workflow.md) 8caebbd

## Phase: Review Fixes

- [x] Task: Apply review suggestions 66f728e
29 conductor/archive/event_driven_metrics_20260223/spec.md (Normal file)
@@ -0,0 +1,29 @@
# Specification: Event-Driven API Metrics Updates

## Overview

Refactor the API metrics update mechanism to be event-driven. Currently, the UI likely polls or recalculates metrics on every frame. This track will implement a signal/event system where `ai_client.py` broadcasts updates only when significant API activities (requests, responses, tool calls, or stream chunks) occur.

## Functional Requirements

- **Event System:** Implement a robust event/signal mechanism (e.g., using a queue or a simple observer pattern) to communicate API lifecycle events.
- **Client Instrumentation:** Update `ai_client.py` to emit events at key points:
  - **Request Start:** When a call is sent to the provider.
  - **Response Received:** When a full or final response is received.
  - **Tool Execution:** When a tool call is processed or a result is returned.
  - **Stream Update:** When a chunk of a streaming response is processed.
- **UI Listener:** Update the GUI components (in `gui.py` or associated panels) to subscribe to these events and update metrics displays only when notified.
- **Decoupling:** Remove any metrics calculation or display logic that is triggered by the UI's main graphical update loop (per-frame).
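A minimal sketch of the observer mechanism this requirement describes, assuming the `EventEmitter` name from the plan; the event names and payload fields are illustrative:

```python
from collections import defaultdict
from typing import Any, Callable

class EventEmitter:
    """Decoupled pub/sub channel for API lifecycle events."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[..., None]]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable[..., None]) -> None:
        self._subscribers[event].append(handler)

    def emit(self, event: str, **payload: Any) -> None:
        # ai_client would call this at request_start, response_received,
        # tool_execution, and stream_update; the GUI only redraws on receipt.
        for handler in self._subscribers[event]:
            handler(**payload)

# The GUI subscribes once; metrics refresh only when an event arrives,
# never per-frame.
events = EventEmitter()
received: list[dict] = []
events.subscribe("response_received", lambda **p: received.append(p))
events.emit("response_received", input_tokens=120, output_tokens=45)
```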
## Non-Functional Requirements

- **Efficiency:** Significant reduction in UI main thread CPU usage related to metrics.
- **Integrity:** Maintain 100% accuracy of token counts and usage data.
- **Responsiveness:** Metrics should update immediately following the corresponding API event.

## Acceptance Criteria

- [ ] UI metrics for token usage, costs, and session state do NOT recalculate on every frame (can be verified by adding logging to the recalculation logic).
- [ ] Metrics update precisely when API calls are made or responses are received.
- [ ] Automated tests confirm that events are emitted correctly by the `ai_client`.
- [ ] The application remains stable and metrics accuracy is verified against the existing polling implementation.

## Out of Scope

- Adding new metrics or visual components.
- Refactoring the core AI logic beyond the event/metrics hook.
5 conductor/archive/gemini_cli_headless_20260224/index.md (Normal file)
@@ -0,0 +1,5 @@
# Track gemini_cli_headless_20260224 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)

@@ -0,0 +1,8 @@
{
  "track_id": "gemini_cli_headless_20260224",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-24T23:45:00Z",
  "updated_at": "2026-02-24T23:45:00Z",
  "description": "Support Gemini CLI headless as an alternative to the raw client_api route, so that the user may use their Gemini subscription and Gemini CLI features within Manual Slop for a more disciplined and visually enriched UX."
}
26 conductor/archive/gemini_cli_headless_20260224/plan.md (Normal file)
@@ -0,0 +1,26 @@
# Implementation Plan: Gemini CLI Headless Integration

## Phase 1: IPC Infrastructure Extension [checkpoint: c0bccce]

- [x] Task: Extend `api_hooks.py` to support synchronous "Ask" requests. This involves adding a way for a client to POST a request and wait for a user response from the GUI. (1792107)
- [x] Task: Update `api_hook_client.py` with a `request_confirmation(tool_name, args)` method that blocks until the GUI responds. (93f640d)
- [x] Task: Create a standalone test script `tests/test_sync_hooks.py` to verify that the CLI-to-GUI communication works as expected. (1792107)
- [x] Task: Conductor - User Manual Verification 'Phase 1: IPC Infrastructure Extension' (Protocol in workflow.md) (c0bccce)

## Phase 2: Gemini CLI Adapter & Tool Bridge

- [x] Task: Implement `scripts/cli_tool_bridge.py`. This script will be called by the Gemini CLI `BeforeTool` hook and use `ApiHookClient` to talk to the GUI. (211000c)
- [x] Task: Implement the `GeminiCliAdapter` in `ai_client.py` (or a new `gemini_cli_adapter.py`). It must handle the `subprocess` lifecycle and parse the `stream-json` output. (b762a80)
- [x] Task: Integrate `GeminiCliAdapter` into the main `ai_client.send()` logic. (b762a80)
- [x] Task: Write unit tests for the JSON parsing and subprocess management in `GeminiCliAdapter`. (b762a80)
- [~] Task: Conductor - User Manual Verification 'Phase 2: Gemini CLI Adapter & Tool Bridge' (Protocol in workflow.md)

## Phase 3: GUI Integration & Provider Support

- [x] Task: Update `gui_2.py` to add "Gemini CLI" to the provider dropdown. (3ce4fa0)
- [x] Task: Implement UI elements for "Gemini CLI Session Management" (Login button, session ID display). (3ce4fa0)
- [x] Task: Update the `manual_slop.toml` logic to persist Gemini CLI specific settings (e.g., path to CLI, approval mode). (3ce4fa0)
- [~] Task: Conductor - User Manual Verification 'Phase 3: GUI Integration & Provider Support' (Protocol in workflow.md)

## Phase 4: Integration Testing & UX Polish

- [x] Task: Create a comprehensive integration test `tests/test_gemini_cli_integration.py` that uses the `live_gui` fixture to simulate a full session. (d187a6c)
- [x] Task: Verify tool confirmation flow: CLI Tool -> Bridge -> GUI Modal -> User Approval -> CLI Execution. (d187a6c)
- [x] Task: Polish the display of CLI telemetry (tokens/latency) in the GUI diagnostics panel. (1e5b43e)
- [x] Task: Conductor - User Manual Verification 'Phase 4: Integration Testing & UX Polish' (Protocol in workflow.md) (1e5b43e)
45 conductor/archive/gemini_cli_headless_20260224/spec.md (Normal file)
@@ -0,0 +1,45 @@
# Specification: Gemini CLI Headless Integration

## Overview

This track integrates the `gemini` CLI as a headless backend provider for Manual Slop. This allows users to leverage their Gemini subscription and the CLI's advanced features (e.g., specialized sub-agents like `codebase_investigator`, structured JSON streaming, and robust session management) directly within the Manual Slop GUI.

## Goals

- Add "Gemini CLI" as a selectable AI provider in Manual Slop.
- Support both persistent interactive sessions and one-off task-specific delegation (e.g., running `gemini investigate`).
- Implement a secure "BeforeTool" hook to ensure all CLI-initiated tool calls are intercepted and confirmed via the Manual Slop GUI.
- Capture and display the CLI's visually enriched output (via JSONL stream) within the existing discussion history.

## Functional Requirements

### 1. Gemini CLI Provider Adapter

- **Implementation**: Create a `GeminiCliAdapter` class (or extend `ai_client.py`) that wraps the `gemini` CLI subprocess.
- **Communication**: Use `--output-format stream-json` to receive real-time updates (text chunks, tool calls, status).
- **Session Management**: Support session persistence by tracking the session ID and passing it to subsequent CLI calls.
- **Authentication**:
  - Provide a "Login to Gemini CLI" action in the GUI that triggers `gemini login`.
  - Support passing an API key via environment variables if configured in `manual_slop.toml`.
### 2. GUI Intercepted Tool Execution

- **Mechanism**: Use the Gemini CLI's `BeforeTool` hook.
- **Hook Helper**: A small Python script, `scripts/cli_tool_bridge.py`, will be registered as the `BeforeTool` hook.
- **IPC**: This bridge script will communicate with Manual Slop's `HookServer` (extending it to support synchronous "ask" requests).
- **Confirmation**: When a tool is requested, the bridge blocks until the user confirms/denies the action in the GUI, returning the decision as JSON to the CLI.
### 3. Visual & Telemetry Integration

- **Rich Output**: Parse the `stream-json` events to display markdown content and tool status in the GUI.
- **Telemetry**: Extract and display token usage and latency metrics provided by the CLI's `result` event.

## Non-Functional Requirements

- **Performance**: The subprocess bridge should introduce minimal latency (<100ms overhead for communication).
- **Reliability**: Gracefully handle CLI crashes or timeouts by reporting errors in the GUI and allowing session resets.

## Acceptance Criteria

- [ ] User can select "Gemini CLI" in the Provider dropdown.
- [ ] User can successfully send messages and receive streamed responses from the CLI.
- [ ] Any tool call (PowerShell/MCP) initiated by the CLI triggers the standard Manual Slop confirmation modal.
- [ ] Tools only execute after user approval; rejection correctly notifies the CLI agent.
- [ ] Session history is maintained correctly across multiple turns when using the CLI provider.

## Out of Scope

- Full terminal emulation (ANSI color support) within the GUI; the focus is on structured text and data.
- Migrating existing raw `client_api` sessions to CLI sessions.
5 conductor/archive/gemini_cli_parity_20260225/index.md (Normal file)
@@ -0,0 +1,5 @@
# Track gemini_cli_parity_20260225 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)

@@ -0,0 +1,8 @@
{
  "track_id": "gemini_cli_parity_20260225",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-25T00:00:00Z",
  "updated_at": "2026-02-25T00:00:00Z",
  "description": "Make sure Gemini CLI behavior and feature set have full parity with regular direct Gemini API usage in ai_client.py and elsewhere."
}
32 conductor/archive/gemini_cli_parity_20260225/plan.md (Normal file)
@@ -0,0 +1,32 @@
# Implementation Plan: Gemini CLI Parity

## Phase 1: Infrastructure & Common Logic

- [x] Task: Initialize MMA Environment `activate_skill mma-orchestrator`
- [x] Task: Audit `gemini_cli_adapter.py` and `ai_client.py` for parity gaps (Findings: missing count_tokens, safety settings, and robust system prompt handling in CLI adapter)
- [x] Task: Implement common logging utilities for CLI bridge observability
- [x] Task: Conductor - User Manual Verification 'Infrastructure & Common Logic' (Protocol in workflow.md)

## Phase 2: Token Counting & Safety Settings

- [x] Task: Write failing tests for token estimation in `GeminiCLIAdapter`
- [x] Task: Implement token counting parity in `GeminiCLIAdapter`
- [x] Task: Write failing tests for safety setting application in `GeminiCLIAdapter`
- [x] Task: Implement safety filter application in `GeminiCLIAdapter`
- [x] Task: Conductor - User Manual Verification 'Token Counting & Safety Settings' (Protocol in workflow.md)

## Phase 3: Tool Calling Parity & System Instructions

- [x] Task: Write failing tests for system instruction usage in `GeminiCLIAdapter`
- [x] Task: Implement system instruction propagation in `GeminiCLIAdapter`
- [x] Task: Write failing tests for tool call/response mapping in `cli_tool_bridge.py`
- [x] Task: Synchronize tool call handling between bridge and `ai_client.py`
- [x] Task: Conductor - User Manual Verification 'Tool Calling Parity & System Instructions' (Protocol in workflow.md)

## Phase 4: Final Verification & Performance Diagnostics

- [x] Task: Implement automated parity regression tests comparing CLI vs Direct API outputs
- [x] Task: Verify bridge latency and error handling robustness
- [x] Task: Conductor - User Manual Verification 'Final Verification & Performance Diagnostics' (Protocol in workflow.md)

## Phase 5: Edge Case Resilience & GUI Integration Tests

- [x] Task: Implement tests for context bleed prevention (filtering non-assistant messages)
- [x] Task: Implement tests for parameter name resilience (dir_path/file_path aliases)
- [x] Task: Implement tests for tool call loop termination and payload persistence
- [x] Task: Conductor - User Manual Verification 'Edge Case Resilience' (Protocol in workflow.md)
27 conductor/archive/gemini_cli_parity_20260225/spec.md (Normal file)
@@ -0,0 +1,27 @@
# Specification: Gemini CLI Parity

## Overview

Achieve full functional and behavioral parity between the Gemini CLI integration (`gemini_cli_adapter.py`, `cli_tool_bridge.py`) and the direct Gemini API implementation (`ai_client.py`). This ensures that users leveraging the Gemini CLI as a headless backend provider experience the same level of capability, reliability, and observability as direct API users.

## Functional Requirements

- **Token Estimation Parity:** Implement accurate token counting for both input and output in the Gemini CLI adapter to match the precision of the direct API.
- **Safety Settings Parity:** Enable full configuration and enforcement of Gemini safety filters when using the CLI provider.
- **Tool Calling Parity:** Synchronize tool definition mapping, call handling, and response processing between the CLI bridge and the direct SDK.
- **System Instructions Parity:** Ensure system prompts and instructions are consistently passed and handled across both providers.
- **Bridge Robustness:** Enhance `cli_tool_bridge.py` and the adapter to improve latency, error handling (retries), and detailed subprocess observability.
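Where the CLI exposes no `count_tokens` equivalent, one possible fallback for the token-estimation requirement above is a character-ratio heuristic. This is an illustrative sketch, not the track's actual implementation; the ratio would need calibration against the direct API's `count_tokens` results to approach the 5% parity target.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate for when a real count_tokens call is unavailable.

    The 4-chars-per-token ratio is a common rule of thumb for English text,
    not a Gemini-specific constant; calibrate it against the direct API's
    count_tokens output to tighten the error margin.
    """
    if not text:
        return 0
    return max(1, round(len(text) / chars_per_token))
```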
## Non-Functional Requirements

- **Observability:** Detailed logging of CLI subprocess interactions for debugging.
- **Performance:** Minimize the overhead introduced by the bridge mechanism.
- **Maintainability:** Ensure that future changes to `ai_client.py` can be easily mirrored in the CLI adapter.

## Acceptance Criteria

- [ ] Token counts for identical prompts match within a 5% margin between CLI and Direct API.
- [ ] Safety settings configured in the GUI are correctly applied to CLI sessions.
- [ ] Tool calls from the CLI are successfully executed and returned via the bridge without loss of context.
- [ ] System instructions are correctly utilized by the model when using the CLI.
- [ ] Automated tests verify that responses and tool execution flows are identical for both providers.

## Out of Scope

- Performance optimizations for the `gemini` CLI binary itself.
- Support for non-Gemini CLI providers in this track.
5 conductor/archive/gui2_feature_parity_20260223/index.md (Normal file)
@@ -0,0 +1,5 @@
# Track gui2_feature_parity_20260223 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)

@@ -0,0 +1,8 @@
{
  "track_id": "gui2_feature_parity_20260223",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-23T20:15:30Z",
  "updated_at": "2026-02-23T20:15:30Z",
  "description": "Get gui_2 working with the latest changes to the project."
}
82 conductor/archive/gui2_feature_parity_20260223/plan.md (Normal file)
@@ -0,0 +1,82 @@
# Implementation Plan: GUIv2 Feature Parity

## Phase 1: Core Architectural Integration [checkpoint: 712d5a8]

- [x] **Task:** Integrate `events.py` into `gui_2.py`. [24b831c]
  - [x] Sub-task: Import the `events` module in `gui_2.py`.
  - [x] Sub-task: Refactor the `ai_client` call in `_do_send` to use the event-driven `send` method.
  - [x] Sub-task: Create event handlers in the `App` class for `request_start`, `response_received`, and `tool_execution`.
  - [x] Sub-task: Subscribe the handlers to `ai_client.events` upon `App` initialization.
- [x] **Task:** Integrate `mcp_client.py` for native file tools. [ece84d4]
  - [x] Sub-task: Import `mcp_client` in `gui_2.py`.
  - [x] Sub-task: Add `mcp_client.perf_monitor_callback` to the `App` initialization.
  - [x] Sub-task: In `ai_client`, ensure the MCP tools are registered and available for the AI to call when `gui_2.py` is the active UI.
- [x] **Task:** Write tests for new core integrations. [ece84d4]
  - [x] Sub-task: Create `tests/test_gui2_events.py` to verify that `gui_2.py` correctly handles AI lifecycle events.
  - [x] Sub-task: Create `tests/test_gui2_mcp.py` to verify that the AI can use MCP tools through `gui_2.py`.
- [x] **Task:** Conductor - User Manual Verification 'Core Architectural Integration' (Protocol in workflow.md)

## Phase 2: Major Feature Implementation

- [x] **Task:** Port the API Hooks System. [merged]
  - [x] Sub-task: Import `api_hooks` in `gui_2.py`.
  - [x] Sub-task: Instantiate `HookServer` in the `App` class.
  - [x] Sub-task: Implement the logic to start the server based on a CLI flag (e.g., `--enable-test-hooks`).
  - [x] Sub-task: Implement the queue and lock for pending GUI tasks from the hook server, similar to `gui.py`.
  - [x] Sub-task: Add a main loop task to process the GUI task queue.
- [x] **Task:** Port the Performance & Diagnostics feature. [merged]
  - [x] Sub-task: Import `PerformanceMonitor` in `gui_2.py`.
  - [x] Sub-task: Instantiate `PerformanceMonitor` in the `App` class.
  - [x] Sub-task: Create a new "Diagnostics" window in `gui_2.py`.
  - [x] Sub-task: Add UI elements (plots, labels) to the Diagnostics window to display FPS, CPU, frame time, etc.
  - [x] Sub-task: Add a throttled update mechanism in the main loop to refresh diagnostics data.
- [x] **Task:** Implement the Prior Session Viewer. [merged]
  - [x] Sub-task: Add a "Load Prior Session" button to the UI.
  - [x] Sub-task: Implement the file dialog logic to select a `.log` file.
  - [x] Sub-task: Implement the logic to parse the log file and populate the comms history view.
  - [x] Sub-task: Implement the "tinted" theme application when in viewing mode and a way to exit this mode.
- [x] **Task:** Write tests for major features.
  - [x] Sub-task: Create `tests/test_gui2_api_hooks.py` to test the hook server integration.
  - [x] Sub-task: Create `tests/test_gui2_diagnostics.py` to verify the diagnostics panel displays data.
- [x] **Task:** Conductor - User Manual Verification 'Major Feature Implementation' (Protocol in workflow.md)

## Phase 3: UI/UX Refinement [checkpoint: cc5074e]

- [x] **Task:** Refactor UI to a "Hub" based layout. [ddb53b2]
  - [x] Sub-task: Analyze the docking layout of `gui.py`.
  - [x] Sub-task: Create wrapper windows for "Context Hub", "AI Settings Hub", "Discussion Hub", and "Operations Hub" in `gui_2.py`.
  - [x] Sub-task: Move existing windows into their respective Hubs using the `imgui-bundle` docking API.
  - [x] Sub-task: Ensure the default layout is saved to and loaded from `manualslop_layout.ini`.
- [x] **Task:** Add Agent Capability Toggles to the UI. [merged]
  - [x] Sub-task: In the "Projects" or a new "Agent" panel, add checkboxes for each agent tool (e.g., `run_powershell`, `read_file`).
  - [x] Sub-task: Ensure these UI toggles are saved to the project's `.toml` file.
  - [x] Sub-task: Ensure `ai_client` respects these settings when determining which tools are available to the AI.
- [x] **Task:** Full Theme Integration. [merged]
  - [x] Sub-task: Review all newly added windows and controls.
  - [x] Sub-task: Ensure that colors, fonts, and scaling from `theme_2.py` are correctly applied everywhere.
  - [x] Sub-task: Test theme switching to confirm all elements update correctly.
- [x] **Task:** Write tests for UI/UX changes. [ddb53b2]
  - [x] Sub-task: Create `tests/test_gui2_layout.py` to verify the hub structure is created.
  - [x] Sub-task: Add tests to verify agent capability toggles are respected.
- [x] **Task:** Conductor - User Manual Verification 'UI/UX Refinement' (Protocol in workflow.md)

## Phase 4: Finalization and Verification

- [x] **Task:** Conduct full manual testing against `spec.md` Acceptance Criteria. (Note: Some UI display issues for text panels persist and will be addressed in a future track.)
  - [x] Sub-task: Verify AC1: `gui_2.py` launches.
  - [x] Sub-task: Verify AC2: Hub layout is correct.
  - [x] Sub-task: Verify AC3: Diagnostics panel works.
  - [x] Sub-task: Verify AC4: API hooks server runs.
  - [x] Sub-task: Verify AC5: MCP tools are usable by AI.
  - [x] Sub-task: Verify AC6: Prior Session Viewer works.
  - [x] Sub-task: Verify AC7: Theming is consistent.
- [x] **Task:** Run the full project test suite.
  - [x] Sub-task: Execute `uv run run_tests.py` (or equivalent).
  - [x] Sub-task: Ensure all existing and new tests pass.
- [x] **Task:** Code Cleanup and Refactoring.
  - [x] Sub-task: Remove any dead code or temporary debug statements.
  - [x] Sub-task: Ensure code follows project style guides.
- [x] **Task:** Conductor - User Manual Verification 'Finalization and Verification' (Protocol in workflow.md)

---

**Note:** This track is being closed. Remaining UI display issues for text panels in the comms and tool call history will be addressed in a subsequent track. Please see the project's issue tracker for details on the new track.
45 conductor/archive/gui2_feature_parity_20260223/spec.md (Normal file)
@@ -0,0 +1,45 @@
# Specification: GUIv2 Feature Parity

## 1. Overview

This track aims to bring `gui_2.py` (the `imgui-bundle` based UI) to feature parity with the existing `gui.py` (the `dearpygui` based UI). This involves porting several major systems and features to ensure `gui_2.py` can serve as a viable replacement and support the latest project capabilities like automated testing and advanced diagnostics.

## 2. Functional Requirements

### FR1: Port Core Architectural Systems

- **FR1.1: Event-Driven Architecture:** `gui_2.py` MUST be refactored to use the `events.py` module for handling API lifecycle events, decoupling the UI from the AI client.
- **FR1.2: MCP File Tools Integration:** `gui_2.py` MUST integrate and use `mcp_client.py` to provide the AI with native, sandboxed file system capabilities (read, list, search).

### FR2: Port Major Features

- **FR2.1: API Hooks System:** The full API hooks system, including `api_hooks.py` and `api_hook_client.py`, MUST be integrated into `gui_2.py`. This will enable external test automation and state inspection.
- **FR2.2: Performance & Diagnostics:** The performance monitoring capabilities from `performance_monitor.py` MUST be integrated. A new "Diagnostics" panel, mirroring the one in `gui.py`, MUST be created to display real-time metrics (FPS, CPU, Frame Time, etc.).
- **FR2.3: Prior Session Viewer:** The functionality to load and view previous session logs (`.log` files from the `/logs` directory) MUST be implemented, including the distinctive "tinted" UI theme when viewing a prior session.

### FR3: UI/UX Alignment

- **FR3.1: 'Hub' UI Layout:** The windowing layout of `gui_2.py` MUST be refactored to match the "Hub" paradigm of `gui.py`. This includes creating:
  - `Context Hub`
  - `AI Settings Hub`
  - `Discussion Hub`
  - `Operations Hub`
- **FR3.2: Agent Capability Toggles:** The UI MUST include checkboxes or similar controls to allow the user to enable or disable the AI's agent-level tools (e.g., `run_powershell`, `read_file`).
- **FR3.3: Full Theme Integration:** All new UI components, windows, and controls MUST correctly apply and respond to the application's theming system (`theme_2.py`).

## 3. Non-Functional Requirements

- **NFR1: Stability:** The application must remain stable and responsive during and after the feature porting.
- **NFR2: Maintainability:** The new code should follow existing project conventions and be well-structured to ensure maintainability.

## 4. Acceptance Criteria

- **AC1:** `gui_2.py` successfully launches without errors.
- **AC2:** The "Hub" layout is present and organizes the UI elements as specified.
- **AC3:** The Diagnostics panel is present and displays updating performance metrics.
- **AC4:** The API hooks server starts and is reachable when `gui_2.py` is run with the appropriate flag.
- **AC5:** The AI can successfully use file system tools provided by `mcp_client.py`.
- **AC6:** The "Prior Session Viewer" can successfully load and display a log file.
- **AC7:** All new UI elements correctly reflect the selected theme.

## 5. Out of Scope

- Deprecating or removing `gui.py`. Both will coexist for now.
- Any new features not already present in `gui.py`. This is strictly a porting and alignment task.
5 conductor/archive/gui2_parity_20260224/index.md (Normal file)
@@ -0,0 +1,5 @@
# Track gui2_parity_20260224 Context

- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
- [Metadata](./metadata.json)
8 conductor/archive/gui2_parity_20260224/metadata.json (Normal file)
@@ -0,0 +1,8 @@
{
  "track_id": "gui2_parity_20260224",
  "type": "feature",
  "status": "new",
  "created_at": "2026-02-24T18:38:00Z",
  "updated_at": "2026-02-24T18:38:00Z",
  "description": "Investigate the differences left between gui.py and gui_2.py. Needs to reach full parity so we can sunset gui.py."
}
Some files were not shown because too many files have changed in this diff.