Hello Sangmin Lee,
Thank you for posting your question in the Microsoft Q&A forum.
Azure AI Foundry prompt flows that occasionally take a prolonged time to execute, or get stuck indefinitely in the "Running" state, suggest an underlying systemic issue rather than an isolated code problem. Since these flows consist of just one LLM call and a Python script, and since they previously worked reliably, the degradation points to potential throttling, resource allocation constraints, or backend service instability.
First, check Azure’s status dashboard for regional outages or degraded performance in AI services like OpenAI or Azure Machine Learning. Microsoft occasionally imposes rate limits on LLM APIs, which could cause queuing delays. If the status page shows no issues, review your application logs in Azure Monitor to identify failed or throttled requests.
Next, validate your Python script’s error handling. If the LLM response is malformed or the request times out, the script might hang waiting for a response that never arrives. Implement explicit timeouts and retry logic to prevent indefinite stalls. Additionally, test with a simplified flow (temporarily remove the Python step) to isolate whether the delays originate from the LLM call or from the script.
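As a minimal sketch of the timeout-and-retry pattern described above, the helper below wraps any callable (for example, your LLM request) so it fails fast instead of blocking the flow indefinitely. The function name `call_with_retry` and the parameter defaults are illustrative, not part of any Azure SDK; in practice you would also pass a timeout directly to your HTTP client or SDK call where supported.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_with_retry(fn, *, timeout_s=30.0, retries=3, backoff_s=1.0):
    """Run fn() with a hard timeout and retry transient failures.

    Prevents a prompt-flow node from hanging forever when the
    underlying LLM call stalls or returns an error.
    """
    last_exc = None
    for attempt in range(retries):
        pool = ThreadPoolExecutor(max_workers=1)
        try:
            # result() raises concurrent.futures.TimeoutError if fn
            # does not finish within timeout_s seconds.
            return pool.submit(fn).result(timeout=timeout_s)
        except Exception as exc:  # timeout or transient API error
            last_exc = exc
            time.sleep(backoff_s * 2 ** attempt)  # exponential backoff
        finally:
            pool.shutdown(wait=False)  # don't block on a hung worker
    raise RuntimeError(f"Call failed after {retries} attempts") from last_exc
```

You would then wrap your LLM invocation, e.g. `call_with_retry(lambda: client.complete(prompt), timeout_s=60)`, so a stalled request surfaces as an explicit error in the flow logs instead of an indefinite "Running" state.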
If the above answer helped, please do not forget to "Accept Answer" as this may help other community members to refer the info if facing a similar issue. Your contribution to the Microsoft Q&A community is highly appreciated.