AgnosPCB API

Created to easily embed the AgnosPCB solution into your existing AOI system. If you already have a good camera and want to integrate the AgnosPCB neural-network-powered AOI service into your system, just upload an image of your PCB/panel to our cloud server using this API and get the inspection result within seconds.

Inside the zip file you will find two Python code examples and four photos of two sets of PCBAs:

example 1.py

It uploads the reference photo of the PCBA, “sample1_REFERENCE.jpg”, and compares it with the “sample1_UUI.jpg” photo. It returns the aligned images, a file containing the coordinates of the detected faults, and a result image in which each fault is marked inside a box.
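For orientation, here is a minimal sketch of the kind of call example 1.py makes. The module name AgnosPCB_API, the placeholder credentials and the output file names are assumptions for illustration; the inference signature and return values are documented further below.

import json
import AgnosPCB_API   # API module shipped in the zip (assumed importable from the same folder)

# Placeholder credentials; request real ones at info@agnospcb.com
USER = "your_user"
PASSWORD = "your_password"

# Compare the golden sample against the unit under inspection
output, reference_aligned = AgnosPCB_API.inference(
    USER, PASSWORD,
    "sample1_REFERENCE.jpg",   # reference board (golden sample)
    "sample1_UUI.jpg")         # unit under inspection (UUI)

# Save the detected fault coordinates and the aligned reference image
with open("sample1_faults.json", "w") as f:
    json.dump(output, f, indent=2)
reference_aligned.save("sample1_reference_aligned.jpg")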

example 2.py

It does the same, but applies a crop to a previously defined area of the image before the inspection.
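The crop is passed as four optional fractions of the image size (see the parameter documentation below). A minimal sketch, reusing the assumed setup and the sample1 images from the previous snippet, with illustrative crop values:

# Inspect only the central 80% of the image: crop from 10% to 90% of width and height
output, reference_aligned = AgnosPCB_API.inference(
    USER, PASSWORD,
    "sample1_REFERENCE.jpg",   # reference board (golden sample)
    "sample1_UUI.jpg",         # unit under inspection (UUI)
    0.1, 0.1, 0.9, 0.9)        # x1_crop, y1_crop, x2_crop, y2_crop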

Specifications:

For Python 3.9 (if you need this API for another Python version, contact us)

Module requirements:

  • Pillow
  • OpenCV (opencv-python)
  • Requests
  • NumPy

————————————————————————

AgnosPCB API Inference method documentation (extracted from example 1.py)

# AgnosPCB_API.inference(user, password, reference, uui, x1_crop, y1_crop, x2_crop, y2_crop)
# INPUT DEFINITION:
#   user: AgnosPCB user
#   password: AgnosPCB password
#   (Contact us at info@agnospcb.com if you need user credentials.)
#   reference: Reference board (golden sample board). This parameter can be a path-to-file string or a PIL Image object
#   uui: Unit under inspection (board to inspect). This parameter can be a path-to-file string or a PIL Image object
#   x1_crop [optional]: float from 0.0 to 1.0 representing the left image fraction of the crop (e.g. 0.1 means the crop starts at 10% of the image width)
#   y1_crop [optional]: float from 0.0 to 1.0 representing the top image fraction of the crop (e.g. 0.1)
#   x2_crop [optional]: float from 0.0 to 1.0 representing the right image fraction of the crop (e.g. 0.9 means the crop ends at 90% of the image width)
#   y2_crop [optional]: float from 0.0 to 1.0 representing the bottom image fraction of the crop (e.g. 0.9)

# OUTPUT DEFINITION
# output: JSON, reference_aligned
# output JSON structure:
# {
#  "success": True,
#  "data": {
#          "codename": "",     // algorithm version
#          "credit": "",       // remaining inspection credits
#          "node_alias": "",   // processing node
#          "time": "",         // inference time
#          "api_version": "",
#          "inference_crop": { // Inference crop over the UUI image
#                            "x1_crop": float from 0.0 to 1.0 representing the left image fraction of the crop. To convert this value to pixels, simply multiply by the image width
#                            "y1_crop": float from 0.0 to 1.0 representing the top image fraction of the crop. To convert this value to pixels, simply multiply by the image height
#                            "x2_crop": float from 0.0 to 1.0 representing the right image fraction of the crop. To convert this value to pixels, simply multiply by the image width
#                            "y2_crop": float from 0.0 to 1.0 representing the bottom image fraction of the crop. To convert this value to pixels, simply multiply by the image height
#                           }
#          },
#  "errors_data": [{
#                    "X": "",      // X coordinate (in image pixels) of the fault center
#                    "Y": "",      // Y coordinate (in image pixels) of the fault center
#                    "label": "",  // Error number
#                    "bbox": {     // Bounding box (in pixel coordinates) of the detected fault
#                            "x_min": "",
#                            "y_min": "",
#                            "x_max": "",
#                            "y_max": ""
#                           }
#                   },
#                   {}
#                   …
#                  ]
# }
#
# output reference_aligned: PIL Image object with the reference image aligned to the UUI image (for inspection analysis)
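To make the output structure concrete, here is a minimal sketch (not part of the API itself) that takes the output JSON and the UUI image from the snippets above, converts the inference crop to pixels and draws a box around each detected fault with Pillow. The output file name is illustrative.

from PIL import Image, ImageDraw

uui_image = Image.open("sample1_UUI.jpg")
draw = ImageDraw.Draw(uui_image)

# Convert the inference crop from image fractions to pixels
crop = output["data"]["inference_crop"]
w, h = uui_image.size
print("Inspected area (pixels):",
      (crop["x1_crop"] * w, crop["y1_crop"] * h,
       crop["x2_crop"] * w, crop["y2_crop"] * h))

# Draw one box per detected fault, labelled with its error number
for fault in output["errors_data"]:
    bbox = fault["bbox"]
    draw.rectangle((float(bbox["x_min"]), float(bbox["y_min"]),
                    float(bbox["x_max"]), float(bbox["y_max"])),
                   outline="red", width=3)
    draw.text((float(fault["X"]), float(fault["Y"])), str(fault["label"]), fill="red")

uui_image.save("sample1_result.jpg")   # result image with the faults boxed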

————————————————————————