Commit 827f5d05 authored by Hammam Abdelwahab

Update README.md
# Monitor Machine Learning Models Deployed on Tensorflow Serving
This is a demo showing possible ways to monitor machine learning models deployed on Tensorflow Serving using Prometheus, Loki & Grafana.
## How to run it
After cloning the repo, run the following on the command line:
`docker-compose up`
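For reference, the compose file wires all services onto one network; the sketch below is an assumption reconstructed from the fixed IPs used elsewhere in this README (10.5.0.6 for serving, 10.5.0.7 for Prometheus, 10.5.0.9 for Loki), not the repo's actual file, and service names, images, and the subnet are illustrative:

```yaml
version: "3"
services:
  tensorflow-serving:
    image: tensorflow/serving        # serves the bundled half plus two model
    ports: ["8501:8501"]             # REST API, mapped to the host
    networks:
      monitoring: {ipv4_address: 10.5.0.6}
  prometheus:
    image: prom/prometheus
    networks:
      monitoring: {ipv4_address: 10.5.0.7}
  loki:
    image: grafana/loki
    networks:
      monitoring: {ipv4_address: 10.5.0.9}
  grafana:
    image: grafana/grafana
    ports: ["3000:3000"]             # Grafana UI on the host
    networks:
      monitoring: {}
networks:
  monitoring:
    ipam:
      config:
        - subnet: 10.5.0.0/16
```

Fixed container IPs make the Grafana data-source URLs below stable across restarts, which is why the demo uses them instead of container names.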
## Model Registering
There are different techniques for registering a trained model for deployment. For simplicity, this demo registers a pre-trained half plus two model that is already included in this repo.
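Tensorflow Serving discovers models from a versioned directory layout, so "registering" the bundled model amounts to pointing the server at a structure like the following (directory names here are illustrative; the exact paths in this repo may differ):

```
models/
└── half_plus_two/
    └── 0001/                 # numeric version directory
        ├── saved_model.pb    # serialized model graph
        └── variables/        # model weights
```

Dropping a new numbered directory (e.g. `0002/`) next to the old one is enough for the server to pick up and serve the new version.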
## Prediction Example
To obtain a prediction, input instances are passed as follows:
`curl -X POST -H "Content-Type: application/json" -d '{"instances": [1.0, 2.0, 5.0]}' http://10.5.0.6/prediction`
The expected output should look something like this
`{ "predictions": [2.5, 3.0, 4.5] }`
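The half plus two model simply computes y = x/2 + 2 for each instance, so the response above is easy to verify offline. A small sketch (the payload shape follows Tensorflow Serving's REST predict API):

```python
import json

def half_plus_two(x):
    """The toy model's function: y = x/2 + 2."""
    return x / 2 + 2

# Request body in the shape Tensorflow Serving's REST predict API expects.
payload = json.dumps({"instances": [1.0, 2.0, 5.0]})

# Reproduce the predictions shown in the example response.
predictions = [half_plus_two(x) for x in json.loads(payload)["instances"]]
print(predictions)  # [2.5, 3.0, 4.5]
```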
More info about Tensorflow Serving with Docker can be found [here](https://www.tensorflow.org/tfx/serving/docker).
## IPs & Ports to use for setup in Grafana
To set up Grafana, open the following URL:
- Grafana: http://localhost:3000 (username: admin, password: admin)

To add Prometheus and Loki as data sources in Grafana, use the following URLs:
- Prometheus: http://10.5.0.7:9090
- Loki: http://10.5.0.9:3100

The Prometheus UI is also reachable from the host at http://localhost:9090.

Prediction example (for the half plus two model): `curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://localhost:8501/v1/models/half_plus_two:predict`
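Prometheus only sees Tensorflow Serving metrics if the server is started with `--monitoring_config_file` pointing at a config that sets `prometheus_config { enable: true }`, and if prometheus.yml has a matching scrape job. A sketch of the relevant prometheus.yml section (the target address is an assumption based on the 10.5.0.6 serving IP and the default REST port 8501 used in this README):

```yaml
scrape_configs:
  - job_name: tensorflow-serving
    # Tensorflow Serving's Prometheus endpoint when monitoring is enabled
    metrics_path: /monitoring/prometheus/metrics
    static_configs:
      - targets: ["10.5.0.6:8501"]
```

Once scraping works, these metrics (request counts, latencies, model load state) can be graphed directly in the Grafana dashboards set up above.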