TF_Curses Project on Github

In building the curses frontend, a lot of resources were used from all over the place. It is hard to get a good starting grip on how to integrate curses into a larger project, so I'm going to start with a list of the pages I used, then follow up with my own example.
http://www.tuxradar.com/content/code-project-build-ncurses-ui-python
http://doc.uh.cz/Python/HowTo/curses/node8.html
https://android.googlesource.com/toolchain/benchmark/+/master/python/src/Doc/howto/curses.rst
https://askubuntu.com/questions/98181/how-to-get-screen-size-through-python-curses
https://web.archive.org/web/20111114120248/http://ironalbatross.net/wiki/index.php5?title=Python_Curses#Getting_screen_dimensions
http://stackoverflow.com/questions/5161552/python-curses-handling-window-terminal-resize
http://www.andrewnoske.com/wiki/Unix_-_ncurses_ui
https://docs.python.org/3/howto/curses.html
https://docs.python.org/2.0/lib/curses-window-objects.html
https://linux.die.net/man/3/ncurses
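Pulling the pieces from those links together, here is a minimal sketch of a resize-aware curses loop. The header/body/footer split and the key handling are my own assumptions for illustration, not TF_Curses code:

```python
import curses

def layout(height, width):
    """Split the screen: 1-row header, 1-row footer, body fills the rest.
    Each value is (nlines, ncols, begin_y, begin_x), ready for curses.newwin."""
    return {"header": (1, width, 0, 0),
            "body": (height - 2, width, 1, 0),
            "footer": (1, width, height - 1, 0)}

def main(stdscr):
    curses.curs_set(0)  # hide the cursor
    while True:
        stdscr.erase()
        h, w = stdscr.getmaxyx()  # screen size, re-read every pass
        regions = layout(h, w)
        # slice to w - 1 so we never write into the bottom-right cell,
        # which raises an error in curses
        stdscr.addstr(0, 0, "TF_Curses demo -- press q to quit"[:w - 1])
        stdscr.addstr(h - 1, 0, f"{h}x{w}"[:w - 1])
        stdscr.refresh()
        ch = stdscr.getch()
        if ch == ord("q"):
            break
        # a terminal resize arrives as curses.KEY_RESIZE; looping back
        # re-reads getmaxyx(), so the layout adapts automatically

if __name__ == "__main__":
    curses.wrapper(main)  # handles init/teardown and restores the terminal
```

`curses.wrapper` is the piece most of the guides above agree on: it restores the terminal even if the program crashes mid-draw.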

Setting up redis with docker

So to move beyond a beginner's level of TensorFlow and machine learning, a database is needed. A database is required when a model, after having been trained, needs to sit as a server and serve its model. And since that sentence was silly: a model can't serve itself. Another program needs to be developed to serve the model. Thankfully that's the easy part, which we will get to.
STILL, the hard part is creating and using the database in such a way that it is deployable alongside the model server, so the two work as a team. That means we are back to talking about Docker Clouds. A Docker Cloud, to the best of my understanding, is metaphorically more like a Dock, and the other dockers that you have (i.e. TF_SERVER, REDIS_SERVER, TBOARD_SERVER and so on) are now SHIPS. SO… if you have your SHIPS… at the DOCK… it's best to now think about putting that DOCK in a harbor. The deployment needs two options here: a local test option, and a scaled AWS version.
The only difference is that the scaled version needs another piece, WAREHOUSES near the docks… but that's why we are going to the AWS harbor: the Redis/database servers will need their own cloud to deploy and manage their data by user and master. Then scaling each of the other pieces ({ SHIPS: [MODEL_SERVER, TBOARD_SERVER] }) accordingly can be dynamically scripted.
… and that's why we need a database.

Goals:
(1) Create a redis docker. Use it to store { key: "value" } and then retrieve it.
(2) Script that to run when beginning to train a new model.
(3) Make another script, running from another docker, that can get the dict, rev_dict result from the redis server (which is a moving part of the TF_Server); that will be the TF_Server test.
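Goal (1) can be sketched like this. The container name and the docker run line are my own placeholders, and the helper takes any object with set/get methods so it can be exercised without a live server:

```python
# Assumed container start (names are mine, not fixed):
#   docker run -d --name tf_redis -p 6379:6379 redis

def store_and_fetch(conn, key, value):
    """Store { key: value } on the server, then read it back as a str."""
    conn.set(key, value)
    fetched = conn.get(key)
    # redis-py returns bytes by default, so decode before returning
    return fetched.decode("utf-8") if isinstance(fetched, bytes) else fetched

if __name__ == "__main__":
    # pip install redis; imported here so the helper is usable without it
    import redis
    r = redis.Redis(host="localhost", port=6379)
    print(store_and_fetch(r, "key", "value"))
```

Running the `__main__` block against the container above completes goal (1): the value goes in with SET and comes back with GET.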

Method to complete Goals:
~ class StartNewModel(object):
~     def make_new_redis_server(self, docker_container):
~         return server_container
~     def prepare_data(self, file_name, server_container):
~         with open(file_name, 'r', encoding='UTF-8') as text:
~             server_container.store(data, index)
~         return word_dict, rev_dict
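As a concrete piece of prepare_data, the word_dict/rev_dict pair can be built in plain Python before anything is pushed to redis. build_vocab is a hypothetical helper of my own, not part of the sketch above:

```python
from collections import Counter

def build_vocab(words):
    """Map each word to an integer id (most frequent first),
    plus the reverse id-to-word map."""
    counts = Counter(words)
    word_dict = {w: i for i, (w, _) in enumerate(counts.most_common())}
    rev_dict = {i: w for w, i in word_dict.items()}
    return word_dict, rev_dict

if __name__ == "__main__":
    word_dict, rev_dict = build_vocab("the cat sat on the mat".split())
    # each (word, index) pair could then be pushed to the redis server,
    # e.g. one SET per pair -- that part depends on the container above
    print(word_dict)
    print(rev_dict)
```

The reverse map is what the TF_Server test in goal (3) would pull back out to turn predicted indices into words.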

Bonus:
(1) Save the TBoard results into a different table in the database, then retrieve the data from the database to start the TBoard.
(2) Start yet another docker that will use the data from the redis docker to start the TBoard docker.

Subscribe for a complete step by step tutorial here on DummyScript.com.

Start at the beginning.

This website will serve as my own guide and set of understanding tools as I continue in the world of programming and computer science. I would also like to extend a "tincan" for all the assumed reasons, and for the opportunity to provide better, more focused content on understanding tools for others in the future.

The first step at the beginning of every new project is creating a clean environment. The reasons for this are so vast that it is difficult to find the scope to list them; nonetheless, a stable and common working environment is key to every step of a new project. Sales pitch over on why this is necessary. On to the first lesson: create a New_Computer.

Goals of New_Computer: a common, stable machine to work from and create tutorials on and about. Sub-Goals: this machine needs to be able to run on the hardware of a Raspberry Pi, AND be distributable to AWS.

Method to complete Goals: Docker. Docker, as it expands its business and is in a growth stage, is a great place to use free services, connected with other free services, to create, by the end of this series, a personal computing workspace online that is of a Future Grade and is accessible from any device the future sends at us.

Steps to complete goal:
(1) create a dockerhub account.
(2) create an AWS account.
(3) create a github account.
(4) create github repository of the New_Computer necessary files and specifications.
(5) create webhook in github for dockerhub.
(6) create webhook in AWS for dockerhub.
(7) create a Start_New_Computer installable program for your device.
(8) Start the New_Computer and enter a terminal for completion of this Goal.
Bonus Goal w/raspi3:
(9) locally deploy the docker and enter a terminal for completion of this Goal.
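Steps (7) and (9) can be scripted from Python. The image and container names below are placeholders I chose, and the subprocess call only does anything if docker is installed on the device:

```python
import subprocess

def docker_run_cmd(image, name, ports=None):
    """Build the `docker run` argument list for starting the New_Computer
    container in detached mode, with optional host:container port mappings."""
    cmd = ["docker", "run", "-d", "--name", name]
    for host_port, container_port in (ports or {}).items():
        cmd += ["-p", f"{host_port}:{container_port}"]
    cmd.append(image)
    return cmd

if __name__ == "__main__":
    # "new_computer" and the image tag are hypothetical placeholders
    cmd = docker_run_cmd("dummyscript/new_computer:latest", "new_computer",
                         {8888: 8888})
    subprocess.run(cmd, check=False)
    # then enter a terminal inside it for step (8)/(9):
    #   docker exec -it new_computer /bin/bash
```

On a Raspberry Pi 3 the same script works for step (9), provided the image was built for ARM.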

Subscribe for a complete step by step tutorial here on DummyScript.com.