url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | active_lock_reason (null) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | draft (bool) | pull_request (dict) | is_pull_request (bool)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/kubeflow/pipelines/issues/5137 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5137/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5137/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5137/events | https://github.com/kubeflow/pipelines/issues/5137 | 808,427,374 | MDU6SXNzdWU4MDg0MjczNzQ= | 5,137 | Problems upgrading to TFX 0.27.0 | {
"login": "rafaascensao",
"id": 17235468,
"node_id": "MDQ6VXNlcjE3MjM1NDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/17235468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafaascensao",
"html_url": "https://github.com/rafaascensao",
"followers_url": "https://api.github.com/users/rafaascensao/followers",
"following_url": "https://api.github.com/users/rafaascensao/following{/other_user}",
"gists_url": "https://api.github.com/users/rafaascensao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafaascensao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafaascensao/subscriptions",
"organizations_url": "https://api.github.com/users/rafaascensao/orgs",
"repos_url": "https://api.github.com/users/rafaascensao/repos",
"events_url": "https://api.github.com/users/rafaascensao/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafaascensao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] | closed | false | {
"login": "neuromage",
"id": 206520,
"node_id": "MDQ6VXNlcjIwNjUyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/206520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neuromage",
"html_url": "https://github.com/neuromage",
"followers_url": "https://api.github.com/users/neuromage/followers",
"following_url": "https://api.github.com/users/neuromage/following{/other_user}",
"gists_url": "https://api.github.com/users/neuromage/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neuromage/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neuromage/subscriptions",
"organizations_url": "https://api.github.com/users/neuromage/orgs",
"repos_url": "https://api.github.com/users/neuromage/repos",
"events_url": "https://api.github.com/users/neuromage/events{/privacy}",
"received_events_url": "https://api.github.com/users/neuromage/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "neuromage",
"id": 206520,
"node_id": "MDQ6VXNlcjIwNjUyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/206520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neuromage",
"html_url": "https://github.com/neuromage",
"followers_url": "https://api.github.com/users/neuromage/followers",
"following_url": "https://api.github.com/users/neuromage/following{/other_user}",
"gists_url": "https://api.github.com/users/neuromage/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neuromage/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neuromage/subscriptions",
"organizations_url": "https://api.github.com/users/neuromage/orgs",
"repos_url": "https://api.github.com/users/neuromage/repos",
"events_url": "https://api.github.com/users/neuromage/events{/privacy}",
"received_events_url": "https://api.github.com/users/neuromage/received_events",
"type": "User",
"site_admin": false
},
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
},
{
"login": "numerology",
"id": 9604122,
"node_id": "MDQ6VXNlcjk2MDQxMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9604122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/numerology",
"html_url": "https://github.com/numerology",
"followers_url": "https://api.github.com/users/numerology/followers",
"following_url": "https://api.github.com/users/numerology/following{/other_user}",
"gists_url": "https://api.github.com/users/numerology/gists{/gist_id}",
"starred_url": "https://api.github.com/users/numerology/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/numerology/subscriptions",
"organizations_url": "https://api.github.com/users/numerology/orgs",
"repos_url": "https://api.github.com/users/numerology/repos",
"events_url": "https://api.github.com/users/numerology/events{/privacy}",
"received_events_url": "https://api.github.com/users/numerology/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@rafaascensao Which version of TFX are you using? the tfx.dsl package was introduced in TFX 0.25.0 I think, and if you are using an older version this error would pop up. Try running updating TFX to latest by running `python3 -m pip install -U tfx`. Anyway, this is a package import issue and most likely has nothing to do with KFP.",
"@ConverJens I am talking about the sample pipeline of TFX Taxi that comes with the deployment (https://github.com/kubeflow/pipelines/blob/master/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py). If I see the code, it uses a TFX image with the version 0.22.0. ",
"@rafaascensao Sorry! In that case, the image needs to be updated to tfx>=0.25.0. Ping @Bobgy, do you know who maintains this?",
"/assign @numerology @chensun @neuromage \n\nThis seems a P0.",
"Dont know why the linking dont work, but PR for fix: https://github.com/kubeflow/pipelines/pull/5165",
"Thanks @NikeNano !\r\n\r\n@Bobgy IIRC our integration test should cover this sample, do we know why this is not captured?",
"I have the same concern, @Ark-kun do you know why?\r\n\r\n/reopen\r\nLet's make sure to also address why it slipped integration tests.\r\n\r\nEDIT: figured out root cause in https://github.com/kubeflow/pipelines/issues/5178",
"@Bobgy: Reopened this issue.\n\n<details>\n\nIn response to [this](https://github.com/kubeflow/pipelines/issues/5137#issuecomment-784242034):\n\n>I have the same concern, @Ark-kun do you know why?\n>\n>/reopen\n>Let's make sure to also address why it slipped integration tests.\n\n\nInstructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.\n</details>",
"@Bobgy: Reopened this issue.\n\n<details>\n\nIn response to [this](https://github.com/kubeflow/pipelines/issues/5137#issuecomment-784242034):\n\n>I have the same concern, @Ark-kun do you know why?\n>\n>/reopen\n>Let's make sure to also address why it slipped integration tests.\n\n\nInstructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.\n</details>",
"I tried to upgrade some other TFX related deps, however, I'm getting the following errors in various places in parameterized_tfx_sample:\r\n* when clicking visualizations on `examplevalidator`, I'm seeing error: `NotFoundError: The specified path gs://gongyuan-test/tfx_taxi_simple/6f9ad66d-2897-4e11-8c2c-e3c614920f35/ExampleValidator/anomalies/32/anomalies.pbtxt was not found.` EDIT: fix sent in https://github.com/kubeflow/pipelines/pull/5186\r\n* when clicking visualizations on `evaluator`, the visualization js simply crashes without any clear error message. I tried to take a look at browser console, but it only shows. EDIT: workarounded in https://github.com/kubeflow/pipelines/pull/5191\r\n```\r\nUncaught Error: Script error for: tensorflow_model_analysis\r\nhttp://requirejs.org/docs/errors.html#scripterror\r\n at C (require.min.js:8)\r\n at HTMLScriptElement.onScriptError (require.min.js:29)\r\n```",
"Another problem:\r\nKFP depends on absl-py: https://github.com/kubeflow/pipelines/blob/a394f8cdf34de71f305ca1a49220a859c81ea502/sdk/python/requirements.in#L26\r\n\r\nconflicts with\r\n\r\ntfx==0.27.0\r\n\r\n\r\n```\r\nERROR: Cannot install absl-py<1 and >=0.11.0 and tfx==0.27.0 because these package versions have conflicting dependencies.\r\n\r\nThe conflict is caused by:\r\n The user requested absl-py<1 and >=0.11.0\r\n tfx 0.27.0 depends on absl-py<0.11 and >=0.9\r\n```\r\n\r\nEDIT: I'm going to use absl that works with TFX, let's see whether it passes presubmit & postsubmit tests.\r\nhttps://github.com/kubeflow/pipelines/pull/5179",
"The third problem I found, https://github.com/kubeflow/pipelines/tree/master/samples/core/tfx-oss sample is still mentioning very old version of TFX. Can someone verify and fix? but this does not block current release.",
"> Another problem:\r\n> KFP depends on absl-py:\r\n> \r\n> https://github.com/kubeflow/pipelines/blob/a394f8cdf34de71f305ca1a49220a859c81ea502/sdk/python/requirements.in#L26\r\n> \r\n> conflicts with\r\n> \r\n> tfx==0.27.0\r\n> \r\n> ```\r\n> ERROR: Cannot install absl-py<1 and >=0.11.0 and tfx==0.27.0 because these package versions have conflicting dependencies.\r\n> \r\n> The conflict is caused by:\r\n> The user requested absl-py<1 and >=0.11.0\r\n> tfx 0.27.0 depends on absl-py<0.11 and >=0.9\r\n> ```\r\n> \r\n> EDIT: I'm going to use absl that works with TFX, let's see whether it passes presubmit & postsubmit tests.\r\n> #5179\r\n\r\nThis one actually conflicts with KFP setup.py\r\nhttps://github.com/kubeflow/pipelines/blob/cd55a55c6229e9e734a35c166c002728b8fa4a72/sdk/python/setup.py#L23\r\nIt's an easy fix that we should do anyway, but I want to understand where `requirements.txt` is used in this case? A KFP SDK user doesn't need to install from `requirements.txt`.",
">I want to understand where requirements.txt is used in this case? A KFP SDK user doesn't need to install from requirements.txt.\r\n\r\nIt's used in automated tests and documentation generation. Some users might be using it as well. `requirements.txt` provides a known good dependency snapshot while `requirements.in` or `setup.py` specify loose top layer dependencies.\r\nSee the official documentation: https://packaging.python.org/discussions/install-requires-vs-requirements/#requirements-files\r\n\r\nIn the future we should make `setup.py` use `requirements.in` instead of duplicating the list.",
">KFP depends on absl-py:\r\n\r\nIt looks like a new dependency. Do we need it much?",
"> * when clicking visualizations on `examplevalidator`, I'm seeing error: `NotFoundError: The specified path gs://gongyuan-test/tfx_taxi_simple/6f9ad66d-2897-4e11-8c2c-e3c614920f35/ExampleValidator/anomalies/32/anomalies.pbtxt was not found.`\r\n\r\nDoes this file exist? What does the validator component produce?",
"> > I want to understand where requirements.txt is used in this case? A KFP SDK user doesn't need to install from requirements.txt.\r\n> \r\n> It's used in automated tests and documentation generation. Some users might be using it as well. `requirements.txt` provides a known good dependency snapshot while `requirements.in` or `setup.py` specify loose top layer dependencies.\r\n> See the official documentation: https://packaging.python.org/discussions/install-requires-vs-requirements/#requirements-files\r\n> \r\n\r\nMy understanding is that `setup.py` is for end users, while `requirements.txt` is for developers. So my question was more like do we need to install `requirements.txt` during tests? I could imagine we need `requirements-test.txt` for test library dependencies. But other than that, shouldn't we just rely on `setup.py` for runtime dependencies? This will test against what end users get rather than a stable known-good environment -- I think test is meant to discover possible end user issues earlier.\r\n\r\n> In the future we should make `setup.py` use `requirements.in` instead of duplicating the list. \r\n\r\nI thought we're deprecating `requirements.in` with the recently introduced bot that updates `requirements.txt` automatically?",
"> > KFP depends on absl-py:\r\n> \r\n> It looks like a new dependency. Do we need it much?\r\n\r\nIt was recently introduced, but seems likely only for logging purpose. So I think it's avoidable at least for now.\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/665f3ce8ba0ac0cada2b4a1bd6c5bf8414d99bcb/sdk/python/kfp/dsl/artifact.py#L17\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/665f3ce8ba0ac0cada2b4a1bd6c5bf8414d99bcb/sdk/python/kfp/containers/entrypoint.py#L17",
">My understanding is that setup.py is for end users, while requirements.txt is for developers.\r\n\r\nI'm not sure there is a hard distinction like this. As a snapshot of tested dependencies, `requirements.txt` can still be useful for the users. For example, TFX setup if often broken due to self-conflicting dependencies. However a `requirements.txt` can be installed.\r\nSame with the KFP and tests: with `requirements.txt` there is a tested configuration. Unlike setup.py it's not affected by the environment and already installed packages. Before we started using it, we had issues when CI was flaky or installed different versions of packages at random. Especially when testing on different versions of python. Now we have the full package snapshot that it tested and a way to update that snapshot.\r\n\r\n>This will test against what end users get\r\n\r\nI'm not sure about this. Without requirements.txt our test environment can be very different from the user environments. The user can have very old packages, but our tests will have the latest ones. `requirements.txt` fixes that by letting the users install the same packages as us.\r\n\r\n>I thought we're deprecating requirements.in with the recently introduced bot that updates requirements.txt automatically?\r\n\r\nWhere does the bot get the initial requirements? Many guides advice putting the dependecies outside the setup.py file.",
"Thanks, then absl problem is solved, I verified 0.10.0 is compatible with both.\r\n\r\nLet me provide more information about the visualization one",
"Another new issue is that postsubmit for iris pipeline is failing after upgrade, we need to figure out if the pipeline file needs any update.\r\n\r\nEDIT:\r\n\r\nThe error message is: `No such file or directory: '/tfx-src/tfx/examples/iris/iris_utils_native_keras.py'`\r\n\r\n```\r\n/tfx/src/tfx/examples# ls\r\n__init__.py bert chicago_taxi_pipeline containers imdb penguin\r\nairflow_workshop bigquery_ml cifar10 custom_components mnist ranking\r\n```\r\n\r\nI found that iris example is either deleted or renamed.\r\n\r\nConfirmed, iris example is removed, and replaced by penguin: https://github.com/tensorflow/tfx/tree/master/tfx/examples/penguin",
">Let me provide more information about the visualization one\r\n\r\nThank you.\r\n\r\nVisualization can be a tough one. It bothers me that our visualizations are version-dependent, but are baked into the backend. We should try to move to the \"visualizations as components\" vision, so that the visualizations can be versioned independently.",
"> > My understanding is that setup.py is for end users, while requirements.txt is for developers.\r\n> \r\n> I'm not sure there is a hard distinction like this. \r\n\r\nRight, I didn't mean there's a hard distinction between the two, but just my personal experience/observation -- if I'm a KFP user, I only care about `pip install kfp`, and probably never bother to go to this GitHub repo and do an additional install step using the `requirements.txt`. \r\nMy point is that what's specified in `setup.py` is the single source of truth for an end user (as I don't think most of them would come and grab the requirements.txt from our GitHub repo). So the tests should try to simulate that as close as possible. Having a safe snapshot reduces or even eliminates test flakiness, but we also lose the opportunity to discover possible dependency issues an end user may hit.\r\n\r\n> >This will test against what end users get\r\n> \r\n> I'm not sure about this. Without requirements.txt our test environment can be very different from the user environments. The user can have very old packages, but our tests will have the latest ones. requirements.txt fixes that by letting the users install the same packages as us.\r\n\r\nI think in this case our tests at least has what a fresh installation of `kfp` has, which is good enough, and sufficient to find the latest incompatible issues.\r\n\r\n> > I thought we're deprecating requirements.in with the recently introduced bot that updates requirements.txt automatically?\r\n>\r\n> Where does the bot get the initial requirements? Many guides advice putting the dependecies outside the setup.py file.\r\n\r\nAccording to [this](https://github.com/kubeflow/pipelines/pull/5056#issuecomment-770346410), it looks at the existing `requirements.txt` and keep moving forward to the next available minor version.\r\n",
"@chensun @Ark-kun two PRs pending review: #5186 #5187 ",
"Found a new problem, KFP cache webhook starts to cache tfx pipelines.\r\n\r\nEDIT: Sent out a fix: https://github.com/kubeflow/pipelines/pull/5188",
"I'm renaming the issue, because we are using it as a collector for all tfx issues.",
"All immediate blockers for release have been resolved. I'm going to create separate issues that we can tackle later."
] | 2021-02-15T10:53:53 | 2021-02-25T11:09:29 | 2021-02-25T11:09:29 | NONE | null | ### What steps did you take:
Installed Kubeflow Pipelines on GCP via kustomize manifests.
Tried to run the Taxi TFX Demo.
### What happened:
On the first step, I got the error "No module named 'tfx.dsl.components'"
### What did you expect to happen:
To successfully run the TFX Taxi Demo.
### Environment:
How did you deploy Kubeflow Pipelines (KFP)?
Via the kustomize manifests in GCP.
KFP version: 1.4.0-rc.1
/kind bug
/area backend | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5137/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5136 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5136/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5136/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5136/events | https://github.com/kubeflow/pipelines/issues/5136 | 808,166,096 | MDU6SXNzdWU4MDgxNjYwOTY= | 5,136 | [Discuss - frontend best practice] one way data flow | {
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2152751095,
"node_id": "MDU6TGFiZWwyMTUyNzUxMDk1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen",
"name": "lifecycle/frozen",
"color": "ededed",
"default": false,
"description": null
}
] | open | false | null | [] | null | [
"/cc @zijianjoy @StefanoFioravanzo ",
"Another partially related problem is the Page abstraction: https://github.com/kubeflow/pipelines/blob/1f32e90ecd1fe8657c084d623b2fcd23ead13c48/frontend/src/pages/Page.tsx#L40-L99\r\n\r\n## Recap of what problem the Page abstraction tries to solve\r\n\r\nThis is a standard KFP page:\r\n![image](https://user-images.githubusercontent.com/4957653/107920187-80705880-6fa7-11eb-9029-79d0435678a9.png)\r\n\r\nSome UI elements are common:\r\n* title\r\n* back button\r\n* breadcrumbs\r\n* buttons in toolbar\r\n\r\nIn order to reuse this design and logic for all KFP pages, they are built as root of router in https://github.com/kubeflow/pipelines/blob/1f32e90ecd1fe8657c084d623b2fcd23ead13c48/frontend/src/components/Router.tsx#L230.\r\n\r\nThis is a reasonable choice, because in the DOM tree, these items are close to the root. So callbacks to control these common elements are passed using the Page interface to each page in a route.\r\n\r\n## Several problems with Page\r\n\r\n* Page uses inheritance (deprecated) to reuse some common logic and build a common interface.\r\n* Callbacks to control global state: `updateToolbar`, `updateBanner` are passed to Page. The interface for controlling toolbar state is imperative, rather than declarative. After each state update, we need to call `updateToolbar` in a child component. e.g. [_selectionChanged](https://github.com/kubeflow/pipelines/blob/1f32e90ecd1fe8657c084d623b2fcd23ead13c48/frontend/src/pages/ExperimentDetails.tsx#L376-L414) handler.\r\n* Tests used Page interface implementation details pervasively, e.g. https://github.com/kubeflow/pipelines/blob/1f32e90ecd1fe8657c084d623b2fcd23ead13c48/frontend/src/pages/ExperimentDetails.test.tsx#L412-L415 and https://github.com/kubeflow/pipelines/blob/1f32e90ecd1fe8657c084d623b2fcd23ead13c48/frontend/src/pages/ExperimentDetails.test.tsx#L385 Therefore, it's a lot of work to refactor these components, all tests that use these implementation details need to be refactored at the same time.\r\n\r\n## Why it's still like this?\r\n\r\nBecause\r\n* the efforts needed to refactor (including the tests) is huge\r\n* the implementation sort of scales OK for KFP current use-cases\r\n* the problems they incur are more of aesthetic & tech debt than productivity (at least until today)\r\n\r\nTherefore, I have never thought a refactoring is of enough priority, considering the number of other more meaningful ways we can improve KFP.\r\n",
"## How I would have implemented it?\r\n\r\nFrom my past experience, more idiomatic way of implementing these is:\r\n* Build standard layout & atomic components for the common page features. A layout component can accept either [render props](https://reactjs.org/docs/render-props.html) or react element, so that what's rendered inside the layout can still be determined by page component. These components should better be stateless and controlled by their parents.\r\n* Let each page component use these common elements to implement pages.\r\n* Build page component tests using react testing library, rather than accessing component instance and methods. (as documented in https://github.com/kubeflow/pipelines/issues/5118#issuecomment-776177682)\r\n\r\nBenefits:\r\n* All the elements are easily reusable and composable.\r\n* Page component rendering logic can be declarative (e.g. which buttons exist on the page can be derived from state of the page, instead of controlled imperatively via callback).\r\n\r\nNote, I think these only apply to Banners, Toolbars, Breadcrumbs and Title. Snackbar and dialogs might need to be rendered across pages, so they are probably not a good fit.\r\n\r\n## Plan\r\n\r\nIf we agree on the ideas, what we can do is building new pages using this more idiomatic paradigm. I still don't think it's worth it to rewrite the entire KFP UI just for this refactoring. If there are other reasons we need to build/rebuild pages, that will be good chances to start applying the new practices.",
"Thank you for the detailed explanation of the concept! I agree with using render props for reusable and portable components on the common UI elements on the top of pages. One question: The UI elements listed are unified across different pages, except buttons in toolbar. Each page will have different set of buttons, and based on user action on children element, these buttons might change on the same page. How does each layer look like if we apply render props for button elements? ",
"@zijianjoy I built a quick demo: https://codesandbox.io/s/magical-violet-hwfxb?file=/src/App.js\r\n\r\nNotice how the Layout component defines layout of the page, while App component can inject stuff into the Layout component dynamically using either render prop or react elements as props.\r\n\r\nSo that, back to KFP UI, we can define a common PageLayout component that gets reused across each Page, and the pages can control what buttons to render in layout slots.",
"@Bobgy Thank you so much Yuan for building the sample demo! \r\n\r\nTo confirm if I understand it correctly, here is a simplified example for KFP UI - RunList:\r\n\r\n```\r\n<Page>\r\n <ExperimentDetail>\r\n <PageLayout />\r\n <RunsList>\r\n <CustomTable />\r\n </RunList>\r\n </ExperimentDetail>\r\n</Page>\r\n```\r\n\r\nIn the above layout, `RunsList` determines what buttons to show (for example, Archive/Activate/Clone runs), and `CustomTable` determines which buttons are enabled/disabled because of user selection, and `RunsList` collects **run selection** callback from `CustomTable`. `ExperimentDetail` Collects button information from `RunsList` callbacks, and updates `PageLayout` accordingly. In such case, can I assume that `ExperimentDetail` determines what buttons should show in `PageLayout`? Because `ExperimentDetail` collects all button related information from children, and use render props to render `PageLayout`. \r\n\r\n\r\nAs reference, the `Page` component looks like below:\r\n\r\n```\r\n render() {\r\n return (\r\n <div>\r\n <ExperimentDetail render={buttonState => (\r\n <PageLayout buttonState={buttonState} />\r\n )}/>\r\n </div>\r\n );\r\n }\r\n```",
"```\n <ExperimentDetail>\n <PageLayout />\n <RunsList>\n <CustomTable />\n </RunList>\n </ExperimentDetail>\n```\n\nPage is unnecessary, files in the pages folder are already pages.\n\nThe following understanding has a key difference, because button management can be very dynamic, the most declarative way is to move state that can affect buttons up to the page level and render the buttons in layout from e.g. move state in RunList up to ExperimentDetails's state, and render buttons directly from ExperimentDetails.\nhttps://reactjs.org/docs/lifting-state-up.html\n\nMoving state up might sound daunting at first, but React has also built React hooks, they allow making abstractions on state logic, so that moving some state can be as simple as moving a hook call from child to parent and pass needed props to the child.",
"I agree with moving the state up to parent so the parent has all the information it needs to determine buttons arrangement. Thank you Yuan!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"/lifecycle frozen"
] | 2021-02-15T04:57:02 | 2021-08-24T13:52:30 | null | CONTRIBUTOR | null | https://reactjs.org/docs/refs-and-the-dom.html
It's recommended to avoid using refs when possible, because the data/event flow mechanism conflicts with React's recommended one-way data flow philosophy (see https://reactjs.org/docs/thinking-in-react.html).
> For example, instead of exposing open() and close() methods on a Dialog component, pass an isOpen prop to it.
Similar to this example, I think that to avoid exposing `refresh()` as a public method, we could have a prop called something like `refreshCounter: number`; the child components can do a refresh whenever they see an update in refreshCounter -- using `componentDidUpdate` or the `useEffect` React hook. In this way, we no longer need to add refs all the way down and trigger a method to refresh components; one refresh would be as simple as a state update to the refreshCounter.
We'd also need an `onLoad` callback that will be triggered by child components when a page finishes loading.
After we avoid using refs to get component instances, we can write components using functions and react hooks: https://reactjs.org/docs/hooks-overview.html. It's a nice way to build abstractions on state changes (and a lot more!).
_Originally posted by @Bobgy in https://github.com/kubeflow/pipelines/pull/5040#discussion_r569956232_ | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5136/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5136/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5134 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5134/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5134/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5134/events | https://github.com/kubeflow/pipelines/issues/5134 | 807,736,947 | MDU6SXNzdWU4MDc3MzY5NDc= | 5,134 | Presubmit failure | {
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-02-13T12:06:23 | 2021-02-14T00:31:58 | 2021-02-14T00:31:58 | CONTRIBUTOR | null | If you look into the test it said
```
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "/usr/local/lib/python3.6/site-packages/kfp/__init__.py", line 24, in <module>
from ._client import Client
File "/usr/local/lib/python3.6/site-packages/kfp/_client.py", line 31, in <module>
from kfp.compiler import compiler
File "/usr/local/lib/python3.6/site-packages/kfp/compiler/__init__.py", line 17, in <module>
from ..containers._component_builder import build_python_component, build_docker_image, VersionedDependency
File "/usr/local/lib/python3.6/site-packages/kfp/containers/_component_builder.py", line 32, in <module>
from kfp.containers import entrypoint
File "/usr/local/lib/python3.6/site-packages/kfp/containers/entrypoint.py", line 23, in <module>
from kfp.containers import entrypoint_utils
File "/usr/local/lib/python3.6/site-packages/kfp/containers/entrypoint_utils.py", line 23, in <module>
from kfp.pipeline_spec import pipeline_spec_pb2
File "/usr/local/lib/python3.6/site-packages/kfp/pipeline_spec/pipeline_spec_pb2.py", line 23, in <module>
create_key=_descriptor._internal_create_key,
AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'
```
Looks like the `protobuf` version is not matching in this case. @Bobgy are you aware of this error? Thanks.
_Originally posted by @Tomcli in https://github.com/kubeflow/pipelines/pull/5059#issuecomment-777656530_
/cc @numerology @chensun @Ark-kun
Can you take a look at this issue? I have seen multiple reports, this error seems to fail consistently. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5134/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5133 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5133/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5133/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5133/events | https://github.com/kubeflow/pipelines/issues/5133 | 807,511,449 | MDU6SXNzdWU4MDc1MTE0NDk= | 5,133 | Kubeflow Pipelines: Move manifests development upstream | {
"login": "yanniszark",
"id": 6123106,
"node_id": "MDQ6VXNlcjYxMjMxMDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanniszark",
"html_url": "https://github.com/yanniszark",
"followers_url": "https://api.github.com/users/yanniszark/followers",
"following_url": "https://api.github.com/users/yanniszark/following{/other_user}",
"gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions",
"organizations_url": "https://api.github.com/users/yanniszark/orgs",
"repos_url": "https://api.github.com/users/yanniszark/repos",
"events_url": "https://api.github.com/users/yanniszark/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanniszark/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "yanniszark",
"id": 6123106,
"node_id": "MDQ6VXNlcjYxMjMxMDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanniszark",
"html_url": "https://github.com/yanniszark",
"followers_url": "https://api.github.com/users/yanniszark/followers",
"following_url": "https://api.github.com/users/yanniszark/following{/other_user}",
"gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions",
"organizations_url": "https://api.github.com/users/yanniszark/orgs",
"repos_url": "https://api.github.com/users/yanniszark/repos",
"events_url": "https://api.github.com/users/yanniszark/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanniszark/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "yanniszark",
"id": 6123106,
"node_id": "MDQ6VXNlcjYxMjMxMDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanniszark",
"html_url": "https://github.com/yanniszark",
"followers_url": "https://api.github.com/users/yanniszark/followers",
"following_url": "https://api.github.com/users/yanniszark/following{/other_user}",
"gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions",
"organizations_url": "https://api.github.com/users/yanniszark/orgs",
"repos_url": "https://api.github.com/users/yanniszark/repos",
"events_url": "https://api.github.com/users/yanniszark/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanniszark/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thank you!\n\n> I understand that the installs folder would probably be transferred under env in this repo.\n\nLet me think about it, there might be too many types and they are not conceptually consistent. Current manifests in env all include mysql and minio, but those in kubeflow/manifest/.../installs don't.\n\n> What about all the other folders? Are they maintained by KFP? Are they abandoned? cc\n\nNo, you can remove the other folders.",
"> Let me think about it, there might be too many types and they are not conceptually consistent\\\r\n\r\n@Bobgy I was wondering if you had any thoughts on this issue. I can start a PR making `installs` into `envs`. What do you think?",
"My current preference is to move installs into kustomize/base/installs/{generic,multi-user}\n(Probably rename base to core, but that can be done later)\n\nMove mysql and minio into kustomize/storage/mysql, kustomize/storage/minio.\n\nAnd, use kustomize/env/xxx to compose them.\n\nWhat do you think?\nThe main motivation is to make component role & boundary clearer.",
"@yanniszark pinging for status update, I want to make sure we don't have duplicate efforts",
"@Bobgy thanks for the ping. I want to ask for more info on your suggestion.\r\nYou proposed:\r\n\r\n> kustomize/base/installs/{generic,multi-user}\r\n\r\nCurrently, the base folder is broken up per-component (application. argo, cache, metadata, pipeline).\r\nThis includes both pipelines (pipelines, cache) and non-pipelines apps (argo, application-controller, metadata).\r\nThose are composed into a kustomization in `base/kustomization.yaml`. In addition, `base/kustomization.yaml` only contains namespaced resources.\r\n\r\nThe multi-user pipelines manifests adds/edits resources to pipelines components (api-service, cache, metadata-writer, persistence-agent, pipelines-profile-controller, pipelines-ui, scheduled-workflow, viewer-controller).\r\n\r\nSo if I understand correctly:\r\n- `base/kustomization.yaml` will become `base/installs/generic`\r\n- `base/installs/multi-user` will contain the resources for multi-user pipelines. However, these resources are both cluster and namespace-scoped. Personally, I don't see the reason for the namespaced/non-namespaced distinction but I see there was some quite some effort to structure this way, so I'd love to know more. Alternatively, we could do `base/installs/multi-user/cluster-scoped` too.\r\n\r\nFinally, what about the tekton installs? Should they go under `env`?\r\n\r\n",
"That reminds me a few things,\n`application` should now be moved out as optional, because we don't need it in kubeflow.\n\n> Personally, I don't see the reason for the namespaced/non-namespaced distinction\n\nBecause in namespaced installation mode, cluster admin can be a separate group of people than namespace admins. So it's more convenient if there's a clear separation of cluster resources.\n\nTherefore, this requirement isn't necessary for multi-user mode, because it's already meant to be shared across namespaces.",
"For tekton KFP, @animeshsingh what do you think?\n\nMy current thoughts are manifest should be next to where development is happening, so maintainence is easier if they are in kfp-tekton repo.",
"Fixed by #5256 "
] | 2021-02-12T20:01:48 | 2021-03-10T01:51:48 | 2021-03-10T01:51:48 | CONTRIBUTOR | null | ### Background
Umbrella-Issue: https://github.com/kubeflow/manifests/issues/1740
As part of the work of wg-manifests for 1.3
(https://github.com/kubeflow/manifests/issues/1735), we are moving manifests
development to upstream repos. This gives the application developers full
ownership of their manifests, tracked in a single place.
To give an example from the `notebook-controller`, the diagram shows the current
situation. Manifests are in two repos, blurring separation of responsibilities
and making things harder for application developers.
![diagram_1](https://user-images.githubusercontent.com/6123106/107092187-c9881600-680b-11eb-9993-fc04c5e5b6dd.PNG)
Instead, we will copy all manifests from the manifests repo back to each
upstream's repo. From there, they will be imported in the manifests repo. The
following diagram presents the desired state:
![diagram_2](https://user-images.githubusercontent.com/6123106/107092197-cd1b9d00-680b-11eb-8123-7fa5424c5cb2.PNG)
### Current State
Kubeflow Pipelines has manifests:
- In the manifests repo, under folder `apps/pipeline/upstream`.
- In upstream repo (https://github.com/kubeflow/pipelines) under folder `manifests/kustomize`.
The current state of the manifests repo and the upstream repo is the following:
![image (8)](https://user-images.githubusercontent.com/6123106/107816463-78cd6b80-6d7d-11eb-94bb-b33a89cf43de.png)
The kubeflow/manifests part of pipelines consists of the following:
- An `upstream` folder, which contains the copied upstream manifests from kubeflow/manifests.
- A kustomization for the pipelines cache (cache folder).
- The `installs` folder, which contains various installation profiles for pipelines (generic, multi-user, tekton, multi-user-tekton). The kustomizations under `installs` refer to the `cache` folder and the `upstream` folder.
- A bunch of folders (`api-service`, `minio`, `mysql`, `persistent-agent`, `pipeline-visualization-service`, `pipelines-runner`, `pipelines-ui`, `pipelines-viewer`, `scheduledworkflow`) which seem unused, since those manifests already exist upstream (most are in kubeflow/pipelines under `manifests/kustomize/base/pipeline`, `minio` and `mysql` are under `manifests/kustomize/env/platform-agnostic`)
I understand that the `installs` folder would probably be transferred under `env` in this repo.
What about all the other folders? Are they maintained by KFP? Are they abandoned? cc @Bobgy
### Desired State
The proposed folder in the upstream repo to receive the manifests is:
`manifests/kustomize`.
The goal is to consolidate all manifests development in the upstream repo.
The manifests repo will include a copy of the manifests under `apps/pipeline/upstream`.
This copy will be synced periodically.
### Success Criteria
The manifests in `apps/pipeline/upstream` should be a copy of the upstream manifests
folder. To do that, the application manifests in `kubeflow/manifests` should be
moved to the upstream application repo.
/assign @yanniszark
cc @kubeflow/wg-pipeline-leads | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5133/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5133/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5215 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5215/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5215/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5215/events | https://github.com/kubeflow/pipelines/issues/5215 | 818,615,384 | MDU6SXNzdWU4MTg2MTUzODQ= | 5,215 | Pipeline metrics are always sorted alphabetically | {
"login": "ypitrey",
"id": 17247240,
"node_id": "MDQ6VXNlcjE3MjQ3MjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/17247240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ypitrey",
"html_url": "https://github.com/ypitrey",
"followers_url": "https://api.github.com/users/ypitrey/followers",
"following_url": "https://api.github.com/users/ypitrey/following{/other_user}",
"gists_url": "https://api.github.com/users/ypitrey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ypitrey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ypitrey/subscriptions",
"organizations_url": "https://api.github.com/users/ypitrey/orgs",
"repos_url": "https://api.github.com/users/ypitrey/repos",
"events_url": "https://api.github.com/users/ypitrey/events{/privacy}",
"received_events_url": "https://api.github.com/users/ypitrey/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930476737,
"node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted",
"name": "help wanted",
"color": "db1203",
"default": true,
"description": "The community is welcome to contribute."
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2186355346,
"node_id": "MDU6TGFiZWwyMTg2MzU1MzQ2",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/good%20first%20issue",
"name": "good first issue",
"color": "fef2c0",
"default": true,
"description": ""
}
] | closed | false | null | [] | null | [
"/area pipelines\r\n@Bobgy maybe you can move this over to the pipelines repo",
"Any updates on this issue? I'll have to name my metrics `1. dropsPerHour` and `2. unscheduled` for them to appear at the top of the list for now, but I fear the gods of software design are going to be angry with me. 😄 \r\n\r\nThanks guys",
"We are open for contributions, here's frontend development guide: https://github.com/kubeflow/pipelines/tree/master/frontend.\r\n\r\nAnd I believe related code to this issue is in https://github.com/kubeflow/pipelines/blob/45c5c18716b57fbf9d491d88ab2fe7e345dc7edb/frontend/src/pages/RunDetails.tsx#L584.",
"Hi @Bobgy @ypitrey, a new contributor here looking for a good first issue to work on. \r\nI like to work on this issue if this is still relevant & have a few questions to clarify the ask.\r\n\r\nIs the ask to simply remove the sorting of metrics or to create a functionality that a user can choose the metrics that show up on the UI? \r\n\r\n> Note: I need to expose more than these two metrics, as I am using these metrics to aggregate results across multiple runs.\r\n\r\nIn addition, I think you're looking for a way to show more than two metrics? Should this be part of this issue?",
"Thank you @annajung !\nThis is still relevant, it seems removing sorting would be enough.",
"Thanks for the clarification @Bobgy \r\nCreated a PR https://github.com/kubeflow/pipelines/pull/5701, any feedback would be appreciated! Thanks"
] | 2021-02-12T16:51:23 | 2021-05-27T02:38:17 | 2021-05-27T02:38:17 | NONE | null | /kind bug
I have a pipeline that exposes pipeline metrics following [this example](https://www.kubeflow.org/docs/pipelines/sdk/pipelines-metrics/). It's working great, but it seems that the metrics always appear sorted alphabetically. I would like to make use of the fact that the first two metrics are displayed next to each run in the run list view, but the two metrics I'm interested in are not the first two in alphabetical order.
Note: I need to expose more than these two metrics, as I am using these metrics to aggregate results across multiple runs.
Am I doing something wrong? Is there a way to not sort the metrics alphabetically?
For context, here is the code I'm using to expose the metrics:
```python
from pathlib import Path

def write_metrics_data(kpis: dict):
    # Build the KFP metrics payload from a dict of {metric name: value}.
    metrics = {
        'metrics': [
            {
                'name': name,
                'numberValue': value,
                'format': "PERCENTAGE" if 'percent' in name else "RAW",
            }
            for name, value in kpis.items()
        ]
    }
    # myjson is the author's own JSON helper module.
    myjson.write(metrics, Path('/mlpipeline-metrics.json'))
```
and this is the header row as displayed in the _Run Output_ tab:
```
activeTrips | dropsPerHour | idleHours | kilometresPerDrop | kilometresPerHour | percentReloads | ...
```
which seems to be in alphabetical order. However, if I do:
```
print(kpis.keys())
```
I get this:
```
['dropsPerHour', 'unscheduled', 'activeTrips', 'totalHours', ...]
```
which isn't in alphabetical order. And the two metrics I'm interested in are the first two ones in this list.
**Environment:**
- Kubeflow version: 1.0.4
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5215/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5130 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5130/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5130/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5130/events | https://github.com/kubeflow/pipelines/issues/5130 | 807,199,813 | MDU6SXNzdWU4MDcxOTk4MTM= | 5,130 | Error pulling gcr.io/ml-pipeline/ml-pipeline-gcp:1.4.0 | {
"login": "BorFour",
"id": 25534385,
"node_id": "MDQ6VXNlcjI1NTM0Mzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/25534385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BorFour",
"html_url": "https://github.com/BorFour",
"followers_url": "https://api.github.com/users/BorFour/followers",
"following_url": "https://api.github.com/users/BorFour/following{/other_user}",
"gists_url": "https://api.github.com/users/BorFour/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BorFour/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BorFour/subscriptions",
"organizations_url": "https://api.github.com/users/BorFour/orgs",
"repos_url": "https://api.github.com/users/BorFour/repos",
"events_url": "https://api.github.com/users/BorFour/events{/privacy}",
"received_events_url": "https://api.github.com/users/BorFour/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hey @BorFour, sorry about the inconvenience, there was a short period the image was not built yet.\r\n\r\nYou should be able to use it now.\r\n\r\nThe best source is to look at github releases, if the release note isn't appended, that means the release was not yet finished."
] | 2021-02-12T12:32:56 | 2021-02-25T08:34:33 | 2021-02-25T08:34:33 | NONE | null | I've been using a component that uses the image `gcr.io/ml-pipeline/ml-pipeline-gcp:1.4.0`. When I try to use it, the component's pod seems to be stuck with the `ImagePullBackOff` status and, eventually, the pipeline times out. I then tried to run the image locally like so:
```bash
docker pull gcr.io/ml-pipeline/ml-pipeline-gcp:1.4.0
```
And then I get this error message:
```bash
Error response from daemon: manifest for gcr.io/ml-pipeline/ml-pipeline-gcp:1.4.0 not found: manifest unknown: Failed to fetch "1.4.0" from request "/v2/ml-pipeline/ml-pipeline-gcp/manifests/1.4.0".
```
I then tried to pull `gcr.io/ml-pipeline/ml-pipeline-gcp:1.3.0` locally and it works just fine. My workaround was to use the same component but with the repo tag `1.3.0`, as it uses this image, but I can't figure out why 1.4.0 doesn't work.
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5130/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5129 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5129/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5129/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5129/events | https://github.com/kubeflow/pipelines/issues/5129 | 807,145,970 | MDU6SXNzdWU4MDcxNDU5NzA= | 5,129 | Support textarea input for pipeline parameter | {
"login": "kim-sardine",
"id": 8458055,
"node_id": "MDQ6VXNlcjg0NTgwNTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8458055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kim-sardine",
"html_url": "https://github.com/kim-sardine",
"followers_url": "https://api.github.com/users/kim-sardine/followers",
"following_url": "https://api.github.com/users/kim-sardine/following{/other_user}",
"gists_url": "https://api.github.com/users/kim-sardine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kim-sardine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kim-sardine/subscriptions",
"organizations_url": "https://api.github.com/users/kim-sardine/orgs",
"repos_url": "https://api.github.com/users/kim-sardine/repos",
"events_url": "https://api.github.com/users/kim-sardine/events{/privacy}",
"received_events_url": "https://api.github.com/users/kim-sardine/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930476737,
"node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted",
"name": "help wanted",
"color": "db1203",
"default": true,
"description": "The community is welcome to contribute."
},
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
},
{
"id": 2186355346,
"node_id": "MDU6TGFiZWwyMTg2MzU1MzQ2",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/good%20first%20issue",
"name": "good first issue",
"color": "fef2c0",
"default": true,
"description": ""
}
] | closed | false | null | [] | null | [
"Hi, this is a nice idea! Sounds useful to me\r\n\r\nWelcome contribution on this!\r\n\r\nyou can find frontend contribution guide in https://github.com/kubeflow/pipelines/tree/master/frontend,\r\nand the create run page is implemented in https://github.com/kubeflow/pipelines/blob/master/frontend/src/pages/NewRun.tsx",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-02-12T11:07:27 | 2022-04-18T17:27:55 | 2022-04-18T17:27:55 | CONTRIBUTOR | null | ### What steps did you take:
I'd like to get a multi-line bash script from the user and pass it into a ContainerOp.
### What happened:
But for now, the pipeline frontend only supports two types of pipeline parameter input: a one-line text input and a JSON editor.
Of course, I can work around it by joining all lines with semicolons or by using a list type and joining dynamically, but it's not intuitive.
However, I'm not sure how pipeline code could tell the frontend to use a textarea input, because there is no type hint for a multi-line string.
Maybe we could use a custom type, or use a textarea input as the default.
### Environment:
kfp_istio_dex_1.2.0 on Windows Docker Desktop
/kind feature
/area frontend
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5129/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5129/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5126 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5126/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5126/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5126/events | https://github.com/kubeflow/pipelines/issues/5126 | 806,604,398 | MDU6SXNzdWU4MDY2MDQzOTg= | 5,126 | Can not load pipeline | {
"login": "dwu926",
"id": 52472341,
"node_id": "MDQ6VXNlcjUyNDcyMzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/52472341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwu926",
"html_url": "https://github.com/dwu926",
"followers_url": "https://api.github.com/users/dwu926/followers",
"following_url": "https://api.github.com/users/dwu926/following{/other_user}",
"gists_url": "https://api.github.com/users/dwu926/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwu926/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwu926/subscriptions",
"organizations_url": "https://api.github.com/users/dwu926/orgs",
"repos_url": "https://api.github.com/users/dwu926/repos",
"events_url": "https://api.github.com/users/dwu926/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwu926/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | null | [] | null | [
"Can you answer the environment questions? I need more information to scope down the problem",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-02-11T17:41:17 | 2022-04-28T18:00:33 | 2022-04-28T18:00:33 | NONE | null | ### What steps did you take:
When I try to upload a pipeline in the "pipeline.tar.gz" format, there is always an error and the pipeline cannot be loaded.
### What happened:
Pipeline version creation failed
upstream connect error or disconnect/reset before headers. reset reason: connection termination
### What did you expect to happen:
### Environment:
<!-- Please fill in those that seem relevant. -->
How did you deploy Kubeflow Pipelines (KFP)?
<!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). -->
KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. -->
KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp -->
### Anything else you would like to add:
/kind bug
<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->
<!--
// /area frontend
// /area backend
// /area sdk
// /area testing
// /area engprod
-->
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5126/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5125 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5125/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5125/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5125/events | https://github.com/kubeflow/pipelines/issues/5125 | 806,211,828 | MDU6SXNzdWU4MDYyMTE4Mjg= | 5,125 | Upgrade gorm | {
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930619513,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTM=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/priority/p1",
"name": "priority/p1",
"color": "cb03cc",
"default": false,
"description": ""
},
{
"id": 1682717397,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzk3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/process",
"name": "kind/process",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | null | [] | null | [
"@Bobgy \r\nBy the way, I noticed that we are not using the main gorm repository ```\"gorm.io/gorm\"```, this will lead to conflict in the future. It should probably become a future refactoring task for the KFP team. \r\n\r\nFor example the way to add a composite unique index is shows as below, but this gave me a syntax error with the current gorm repository being used... ```\"github.com/jinzhu/gorm\"```\r\n\r\n```golang\r\npackage main\r\n\r\nimport (\r\n\t\"gorm.io/driver/mysql\"\r\n\t\"gorm.io/gorm\"\r\n)\r\n\r\n// ALTER TABLE dev.pipelines ADD UNIQUE `unique_index`(`Name`, `Namespace`);\r\ntype Pipeline struct {\r\n\tgorm.Model\r\n\tNamespace string `gorm:\"column:Namespace; size:63; default:''; index:idx_name_ns, unique\"`\r\n\tName string `gorm:\"column:Name; not null; index:idx_name_ns\"`\r\n}\r\n\r\nfunc main() {\r\n\t// db, err := gorm.Open(sqlite.Open(\"test.db\"), &gorm.Config{})\r\n\tdsn := \"root:@tcp(localhost:3306)/dev?charset=utf8mb4&parseTime=True&loc=Local\"\r\n\tdb, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})\r\n\tif err != nil {\r\n\t\tpanic(\"failed to connect database\")\r\n\t}\r\n\r\n\t// Migrate the schema\r\n\tdb.AutoMigrate(&Pipeline{})\r\n\r\n\t// Create\r\n\tdb.Create(&Pipeline{Namespace: \"admin\", Name: \"p2\"})\r\n}\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-02-11T09:23:12 | 2022-04-18T17:27:59 | 2022-04-18T17:27:59 | CONTRIBUTOR | null |
@capri-xiyue The creation of a unique index on (Namespace, Name) is done in the client manager. The reason we can't do this as part of the Pipeline struct definition is that the KFP repository is using a branch of the gorm package ```"github.com/jinzhu/gorm"``` which does not support it; the refactoring for this is not in scope.
_Originally posted by @maganaluis in https://github.com/kubeflow/pipelines/pull/4835#discussion_r571721941_ | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5125/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5124 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5124/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5124/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5124/events | https://github.com/kubeflow/pipelines/issues/5124 | 805,513,534 | MDU6SXNzdWU4MDU1MTM1MzQ= | 5,124 | Data Versioning with Kubeflow | {
"login": "Vindhya-Singh",
"id": 20332927,
"node_id": "MDQ6VXNlcjIwMzMyOTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/20332927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vindhya-Singh",
"html_url": "https://github.com/Vindhya-Singh",
"followers_url": "https://api.github.com/users/Vindhya-Singh/followers",
"following_url": "https://api.github.com/users/Vindhya-Singh/following{/other_user}",
"gists_url": "https://api.github.com/users/Vindhya-Singh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vindhya-Singh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vindhya-Singh/subscriptions",
"organizations_url": "https://api.github.com/users/Vindhya-Singh/orgs",
"repos_url": "https://api.github.com/users/Vindhya-Singh/repos",
"events_url": "https://api.github.com/users/Vindhya-Singh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vindhya-Singh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1682717392,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzky",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question",
"name": "kind/question",
"color": "2515fc",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [
"Hi @VindhyaSRajan!\r\n\r\nDid you look into https://github.com/pachyderm/pachyderm? I think they also have some integration with KFP.\r\nKFP is very flexible to work with any external system by components.\r\n\r\nAny feedback on gaps?",
"Can KFP be set to take kubernetes snapshots after each step, and then pass the name of that snapshot as the `data_source` for the next step? If not, I think that is a solid place to start for having data versioning built into KFP. I have been working on code for Kale that, once working, should create pipelines that create snapshots after each step https://github.com/kubeflow-kale/kale/pulls. ",
"That's an interesting idea, how do you envision that being a 1st party feature? Does it need to?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Sorry, I completely missed your response. I’d need to think a bit about how this could be a first party feature, but I do think it is something people would be interested in. It would allow for data lineage which is needed in some environments (like research) and could help with debugging a pipeline. \r\n\r\nAm I right in thinking the implementation for a feature like this would greatly differ between KFP V1 and V2?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hey just wanted to comment on this to ask if there's any progress? I've a team of researchers and we're interested in using something like DVC in kfp. ",
"+1",
"+1",
"+1",
"+1",
"Any news or comment on this?"
] | 2021-02-10T13:28:33 | 2023-03-10T17:14:53 | null | NONE | null | Hello,
I am working on setting up in-house ML infrastructure for my company, and we decided to go with Kubeflow. We need to ensure that the pipelines support data versioning as well. I understand from the official Kubeflow documentation that this is possible with Rok Data Management. However, we are interested in exploring other options as well. Thus, my question comes in two parts:
1. Is there any alternative to using Rok for data versioning with Kubeflow pipelines?
2. Is it possible to use DVC for data versioning with Kubeflow?
Thanks :) | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5124/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5124/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5123 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5123/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5123/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5123/events | https://github.com/kubeflow/pipelines/issues/5123 | 805,484,972 | MDU6SXNzdWU4MDU0ODQ5NzI= | 5,123 | [Multi User] failed to call 'kfp.get_run' in in-cluster juypter notebook | {
"login": "anneum",
"id": 60262966,
"node_id": "MDQ6VXNlcjYwMjYyOTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/60262966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anneum",
"html_url": "https://github.com/anneum",
"followers_url": "https://api.github.com/users/anneum/followers",
"following_url": "https://api.github.com/users/anneum/following{/other_user}",
"gists_url": "https://api.github.com/users/anneum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anneum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anneum/subscriptions",
"organizations_url": "https://api.github.com/users/anneum/orgs",
"repos_url": "https://api.github.com/users/anneum/repos",
"events_url": "https://api.github.com/users/anneum/events{/privacy}",
"received_events_url": "https://api.github.com/users/anneum/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1682627575,
"node_id": "MDU6TGFiZWwxNjgyNjI3NTc1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/misc",
"name": "kind/misc",
"color": "c2e0c6",
"default": false,
"description": "types beside feature and bug"
}
] | closed | false | null | [] | null | [
"I have checked if the run has been correctly inserted into the `mlpipeline` database. I see both the run and the correct namespace.\r\n```\r\nkubectl -nkubeflow exec -it mysql-7694c6b8b7-nxn2h -- bash\r\nroot@mysql-7694c6b8b7-nxn2h:/# mysql\r\nmysql> use mlpipeline;\r\nmysql> select uuid, DisplayName, namespace, ServiceAccount from run_details where uuid = 'e5b4e73c-2709-41b0-af75-f9b9dcb372f2';\r\n+--------------------------------------+---------------------------------+-----------+----------------+\r\n| uuid | DisplayName | namespace | ServiceAccount |\r\n+--------------------------------------+---------------------------------+-----------+----------------+\r\n| e5b4e73c-2709-41b0-af75-f9b9dcb372f2 | candies-sharing-0s70d_run-13csv | mynamespace | default-editor |\r\n+--------------------------------------+---------------------------------+-----------+----------------+\r\n```",
"Long term solution should be https://github.com/kubeflow/pipelines/issues/5138"
] | 2021-02-10T12:49:34 | 2021-02-26T01:08:10 | 2021-02-26T01:08:10 | NONE | null | ### What steps did you take:
I have a notebook server in a multi-user environment with Kale.
After fixing several bugs based on community comments (see below), I ran into a new issue.
Added `ServiceRoleBinding` and `EnvoyFilter` as mentioned in https://github.com/kubeflow/pipelines/issues/4440#issuecomment-733702980
```
export NAMESPACE=mynamespace
export NOTEBOOK=mynotebook
export [email protected]
cat > ./envoy_filter.yaml << EOM
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
name: bind-ml-pipeline-nb-${NAMESPACE}
namespace: kubeflow
spec:
roleRef:
kind: ServiceRole
name: ml-pipeline-services
subjects:
- properties:
source.principal: cluster.local/ns/${NAMESPACE}/sa/default-editor
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: add-header
namespace: ${NAMESPACE}
spec:
configPatches:
- applyTo: VIRTUAL_HOST
match:
context: SIDECAR_OUTBOUND
routeConfiguration:
vhost:
name: ml-pipeline.kubeflow.svc.cluster.local:8888
route:
name: default
patch:
operation: MERGE
value:
request_headers_to_add:
- append: true
header:
key: kubeflow-userid
value: ${USER}
workloadSelector:
labels:
notebook-name: ${NOTEBOOK}
EOM
```
Added the namespace to `.config/kfp/context.json` as mentioned in https://github.com/kubeflow-kale/kale/issues/210#issuecomment-727018461
Added `RoleBinding` as mentioned in https://github.com/kubeflow-kale/kale/issues/210#issuecomment-697231513
```
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: allow-workflow-nb-mynamespace
namespace: mynamespace
subjects:
- kind: ServiceAccount
name: default-editor
namespace: mynamespace
roleRef:
kind: ClusterRole
name: argo
apiGroup: rbac.authorization.k8s.io
EOF
```
### What happened:
The pipeline pods run successfully but I get an error in the Jupyter server. Additionally, in the experiment section I see the run but no graph is displayed.
```
2021-02-10 08:39:54 run:114 [[INFO]] [TID=82upm9iyob] [/home/jovyan/data-vol-1/examples/base/candies_sharing.ipynb] Executing RPC function 'get_run(run_id=e5b4e73c-2709-41b0-af75-f9b9dcb372f2)'
2021-02-10 08:39:54 run:125 [[ERROR]] [TID=82upm9iyob] [/home/jovyan/data-vol-1/examples/base/candies_sharing.ipynb] RPC function 'get_run' raised an unhandled exception
Traceback (most recent call last):
...
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'trailer': 'Grpc-Trailer-Content-Type', 'date': 'Wed, 10 Feb 2021 08:39:54 GMT', 'x-envoy-upstream-service-time': '1', 'server': 'envoy', 'transfer-encoding': 'chunked'})
HTTP response body: {"error":"Failed to authorize the request.: Failed to authorize with the run Id.: Failed to get namespace from run id.: InternalServerError: Failed to get run: invalid connection: invalid connection","message":"Failed to authorize the request.: Failed to authorize with the run Id.: Failed to get namespace from run id.: InternalServerError: Failed to get run: invalid connection: invalid connection","code":13,"details":[{"@type":"type.googleapis.com/api.Error","error_message":"Internal Server Error","error_details":"Failed to authorize the request.: Failed to authorize with the run Id.: Failed to get namespace from run id.: InternalServerError: Failed to get run: invalid connection: invalid connection"}]}
```
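For reference, a minimal sketch of the equivalent SDK call that triggers this error, assuming an in-cluster notebook with kfp 1.4.0 (Kale issues the same `get_run` through its RPC layer):
```python
import kfp

# In-cluster, kfp.Client() defaults to the ml-pipeline service endpoint.
client = kfp.Client()
run_detail = client.get_run("e5b4e73c-2709-41b0-af75-f9b9dcb372f2")
print(run_detail.run.status)
```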
### What did you expect to happen:
I receive status feedback when the pipeline runs successfully and see the graph in the experiments section.
### Environment:
<!-- Please fill in those that seem relevant. -->
How did you deploy Kubeflow Pipelines (KFP)?
<!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). -->
Full Kubeflow Deployment on an on-premise cluster.
KFP version: 1.0.4<!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. -->
KFP SDK version: kfp 1.4.0, kfp-pipeline-spec 0.1.5, kfp-server-api 1.3.0<!-- Please attach the output of this shell command: $pip list | grep kfp -->
### Anything else you would like to add:
`kfp pipeline list` executed on the notebook server.
```
kfp pipeline list
+--------------------------------------+-------------------------------------------------+---------------------------+
| Pipeline ID | Name | Uploaded at |
+======================================+=================================================+===========================+
| 9f04bfad-cad5-4967-bad3-bf7e2f0fe156 | candies-sharing-0s70d | 2021-02-10T08:39:05+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
```
#### Are there any plans to create the ServiceRoleBinding, EnvoyFilter, and RoleBinding automatically in the future, instead of requiring them to be created manually?
/kind bug
<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->
<!--
// /area frontend
// /area backend
// /area sdk
// /area testing
// /area engprod
-->
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5123/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5121 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5121/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5121/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5121/events | https://github.com/kubeflow/pipelines/issues/5121 | 805,189,626 | MDU6SXNzdWU4MDUxODk2MjY= | 5,121 | Failed to install KFP due to Error: Error 1054: Unknown column 'idx_pipeline_version_uuid_name' in 'where clause | {
"login": "uhhc",
"id": 3928357,
"node_id": "MDQ6VXNlcjM5MjgzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3928357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uhhc",
"html_url": "https://github.com/uhhc",
"followers_url": "https://api.github.com/users/uhhc/followers",
"following_url": "https://api.github.com/users/uhhc/following{/other_user}",
"gists_url": "https://api.github.com/users/uhhc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uhhc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uhhc/subscriptions",
"organizations_url": "https://api.github.com/users/uhhc/orgs",
"repos_url": "https://api.github.com/users/uhhc/repos",
"events_url": "https://api.github.com/users/uhhc/events{/privacy}",
"received_events_url": "https://api.github.com/users/uhhc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"@uhhc Are you willing to take over this issue? In addition, if we change it to single quote, will it affects users using other mysql mode?",
"> @uhhc Are you willing to take over this issue? In addition, if we change it to single quote, will it affects users using other mysql mode?\r\n\r\nOkay, I'll commit a PR in a moment. The single quote is the right and compatible way to enclose string literals in MySQL, it will not be affected by sql mode. For more info please see [here](https://dev.mysql.com/doc/refman/5.7/en/string-literals.html)."
] | 2021-02-10T05:39:04 | 2021-02-24T01:10:15 | 2021-02-24T01:10:15 | CONTRIBUTOR | null | ### What steps did you take:
I'm installing KFP in a new k8s cluster.
### What happened:
The pod console log shows the following error:
```
Failed to query pipeline_version table's indices. Error: Error 1054: Unknown column 'idx_pipeline_version_uuid_name' in 'where clause'
```
But it deployed successfully in the past, so we checked the difference between the two environments and found that the MySQL sql_mode values are not the same. The newer one sets 'ANSI_QUOTES' in the sql_mode value, and no errors occur when this value is removed.
The [MySQL doc](https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sqlmode_ansi_quotes) says:
> ANSI_QUOTES
> Treat " as an identifier quote character (like the ` quote character) and not as a string quote character. You can still use ` to quote identifiers with this mode enabled. With ANSI_QUOTES enabled, you cannot use double quotation marks to quote literal strings because they are interpreted as identifiers.
When I checked the [source code](https://github.com/kubeflow/pipelines/blob/master/backend/src/apiserver/client_manager.go#L271), I found that you use **double quotation marks** in the raw SQL clause:
```
show index from pipeline_versions where Key_name="idx_pipeline_version_uuid_name"
```
and this is **not compatible with different sql_mode settings**.
Changing the double quotation marks to single quotes will solve this problem, since single quotes always denote string literals regardless of sql_mode; a repro sketch follows below.
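A hedged repro sketch of the quoting behavior (not from the original report; it assumes a local MySQL with the `mlpipeline` database and the `mysql-connector-python` package):
```python
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", database="mlpipeline")
cur = conn.cursor()

# With ANSI_QUOTES, double quotes become identifier quotes, so the
# double-quoted "literal" is parsed as a column name -> Error 1054.
cur.execute("SET SESSION sql_mode = 'ANSI_QUOTES'")
try:
    cur.execute('show index from pipeline_versions where Key_name="idx_pipeline_version_uuid_name"')
except mysql.connector.Error as err:
    print(err)  # 1054 (42S22): Unknown column 'idx_pipeline_version_uuid_name' ...

# Single quotes always denote string literals, regardless of sql_mode.
cur.execute("show index from pipeline_versions where Key_name='idx_pipeline_version_uuid_name'")
print(cur.fetchall())
```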
### Environment:
<!-- Please fill in those that seem relevant. -->
How did you deploy Kubeflow Pipelines (KFP)?
Through my Helm/Charts
KFP version: `0.1.38` (This problem affects all versions, including the latest master)
KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp -->
### Anything else you would like to add:
/kind bug
<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->
/area backend
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5121/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5119 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5119/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5119/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5119/events | https://github.com/kubeflow/pipelines/issues/5119 | 805,171,142 | MDU6SXNzdWU4MDUxNzExNDI= | 5,119 | Invalid pipeline is generated if its name contains non-ascii chars only | {
"login": "yuku",
"id": 96157,
"node_id": "MDQ6VXNlcjk2MTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/96157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuku",
"html_url": "https://github.com/yuku",
"followers_url": "https://api.github.com/users/yuku/followers",
"following_url": "https://api.github.com/users/yuku/following{/other_user}",
"gists_url": "https://api.github.com/users/yuku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuku/subscriptions",
"organizations_url": "https://api.github.com/users/yuku/orgs",
"repos_url": "https://api.github.com/users/yuku/repos",
"events_url": "https://api.github.com/users/yuku/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuku/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"A workaround is to contain at least one ASCII char in the pipeline name."
] | 2021-02-10T05:02:54 | 2021-02-19T02:20:31 | 2021-02-19T02:20:31 | CONTRIBUTOR | null | ### What steps did you take:
Suppose there is a pipeline with a non-ASCII name:
```py
# sample.py
from kfp import dsl
@dsl.pipeline(name="サンプル")
def sample():
...
```
Then compile it:
```bash
dsl-compile --py sample.py --out sample.yaml
```
The sample.yaml becomes:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: Pipeline-
...
```
Then upload the pipeline and create run.
### What happened:
The run failed because `Pipeline-` contains an uppercase character (`P`):
![image](https://user-images.githubusercontent.com/96157/107466593-acef3380-6ba7-11eb-81c1-4cd24d1f7337.png)
```json
{
"error":"Failed to create a new run.: InternalServerError: Failed to create a workflow for (): Workflow.argoproj.io \"Pipeline-45z6j\" is invalid: [metadata.generateName: Invalid value: \"Pipeline-\": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'), metadata.name: Invalid value: \"Pipeline-45z6j\": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')]",
"message":"Failed to create a new run.: InternalServerError: Failed to create a workflow for (): Workflow.argoproj.io \"Pipeline-45z6j\" is invalid: [metadata.generateName: Invalid value: \"Pipeline-\": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'), metadata.name: Invalid value: \"Pipeline-45z6j\": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')]",
"code":13,
"details":[
{
"@type":"type.googleapis.com/api.Error",
"error_message":"Internal Server Error",
"error_details":"Failed to create a new run.: InternalServerError: Failed to create a workflow for (): Workflow.argoproj.io \"Pipeline-45z6j\" is invalid: [metadata.generateName: Invalid value: \"Pipeline-\": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'), metadata.name: Invalid value: \"Pipeline-45z6j\": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')]"
}
]
}
```
### What did you expect to happen:
Successfully create run.
### Environment:
<!-- Please fill in those that seem relevant. -->
How did you deploy Kubeflow Pipelines (KFP)? AI Platform Pipelines
<!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). -->
KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> 1.0.4
KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp --> 1.3.0
### Anything else you would like to add:
The string `'Pipeline'` comes from
https://github.com/kubeflow/pipelines/blob/3d40bba9a3e14b170a2c05e41a0b0bee64d399b8/sdk/python/kfp/compiler/compiler.py#L692
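As a hedged illustration of the workaround noted in the comments (include at least one ASCII character in the name), assuming the compiler's sanitizer keeps ASCII characters when deriving `generateName`:
```python
from kfp import dsl

# The ASCII prefix "sample" survives sanitization, so generateName is
# derived from it (lowercase, DNS-1123 compliant) instead of falling
# back to the invalid default "Pipeline-".
@dsl.pipeline(name="sample サンプル")
def sample():
    ...
```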
/kind bug
<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->
<!--
// /area frontend
// /area backend
// /area sdk
// /area testing
// /area engprod
-->
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5119/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5118 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5118/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5118/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5118/events | https://github.com/kubeflow/pipelines/issues/5118 | 804,825,708 | MDU6SXNzdWU4MDQ4MjU3MDg= | 5,118 | [Discuss] Kubeflow Pipelines frontend development best practice. | {
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1682717377,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzc3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/discussion",
"name": "kind/discussion",
"color": "ecfc15",
"default": false,
"description": ""
},
{
"id": 2152751095,
"node_id": "MDU6TGFiZWwyMTUyNzUxMDk1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen",
"name": "lifecycle/frozen",
"color": "ededed",
"default": false,
"description": null
}
] | open | false | null | [] | null | [
"### Suggested Practice\r\n\r\nPrefer using `react-testing-library` in testing, avoid using `enzyme` when possible.\r\n\r\n### Reason\r\n\r\n> The more your tests resemble the way your software is used, the more confidence they can give you. \r\n\r\n`react-testing-library` allows you to focus on the user facing elements and interactions, while `enzyme` has tested implementation detail which might produce false positive/negative when refactoring. Although `enzyme` has helped with certain scenarios and it has been widely used by community, it can be verbose and fragile which doesn't serve the best interest: **production user's experience**. However, it is hard to flip the coin at once, so feel free to choose the best fit library depending on the scenario. https://kentcdodds.com/blog/testing-implementation-details\r\n\r\n### Example (Optional)\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/master/frontend/src/pages/ExperimentList.test.tsx#L317-L324\r\n\r\nInstead of testing the props value, validating that the UI has rendered 2 runs for certain experiment is preferred.",
"For snapshot testing best practices, https://github.com/kubeflow/pipelines/pull/3166",
"Regarding the process, I'd suggest putting up individual PRs for the proposals, so that each of them can be addressed and discussed in its own context. It's hard to browse if we have several different topics in one github issue.",
"For one way data flow: https://github.com/kubeflow/pipelines/issues/5136",
"> Regarding the process, I'd suggest putting up individual PRs for the proposals, so that each of them can be addressed and discussed in its own context. It's hard to browse if we have several different topics in one github issue.\r\n\r\nThat makes total sense to have individual PRs for each proposal, thank you for bringing this up!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n",
"/lifecycle frozen"
] | 2021-02-09T18:58:27 | 2022-05-09T19:53:29 | null | COLLABORATOR | null | ## Topic
Inspired by https://github.com/kubeflow/pipelines/pull/5040#discussion_r569956232: determine the patterns and anti-patterns for frontend development on Kubeflow Pipelines. There are some existing development styles in the codebase which are either out of date or not recommended. For example, enzyme vs. react-testing-library, and Ref vs. Hook/State Update.
## Request
Everyone is welcome to add their opinions on best practices for frontend development. Please use the following format in your comment for easier review:
```
### Suggested Practice
// Write down the pattern or anti-pattern you suggest, prefer a concise statement with necessary link.
### Reason
// Write down the reason why a pattern is preferred, or why certain pattern should be avoided. You can be very verbose here.
### Example (Optional)
// Give an entry point for people to understand how to adopt this practice, or point out the existing file in codebase where we should improve or should follow.
```
## Action
After collecting feedback, frontend reviewers will make a call on whether to apply each proposed practice, and then update the [README](https://github.com/kubeflow/pipelines/blob/master/frontend/README.md).
/kind discussion
/area frontend
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5118/reactions",
"total_count": 3,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5118/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5117 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5117/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5117/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5117/events | https://github.com/kubeflow/pipelines/issues/5117 | 804,746,653 | MDU6SXNzdWU4MDQ3NDY2NTM= | 5,117 | What is the Meaning for KFP_FLAGS.DEPLOYMENT | {
"login": "kudla",
"id": 9119802,
"node_id": "MDQ6VXNlcjkxMTk4MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9119802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kudla",
"html_url": "https://github.com/kudla",
"followers_url": "https://api.github.com/users/kudla/followers",
"following_url": "https://api.github.com/users/kudla/following{/other_user}",
"gists_url": "https://api.github.com/users/kudla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kudla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kudla/subscriptions",
"organizations_url": "https://api.github.com/users/kudla/orgs",
"repos_url": "https://api.github.com/users/kudla/repos",
"events_url": "https://api.github.com/users/kudla/events{/privacy}",
"received_events_url": "https://api.github.com/users/kudla/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1682717392,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzky",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question",
"name": "kind/question",
"color": "2515fc",
"default": false,
"description": ""
}
] | closed | false | {
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @kudla, it's used to distinguish the three installation modes in https://www.kubeflow.org/docs/pipelines/installation/overview/.\r\n\r\nAnd as you already understood, the major difference is multi user mode. It's not really used much for other ways.\r\n\r\n",
"Question answered, feel free to raise new issues or keep commenting if you have more questions."
] | 2021-02-09T17:14:24 | 2021-02-22T07:51:21 | 2021-02-22T07:51:20 | NONE | null | Please. Just can't find any info on `KFP_FLAGS.DEPLOYMENT` for pipelines implementation
According to
[front](https://github.com/kubeflow/pipelines/blob/1.1.2/frontend/src/index.tsx#L59)
```jsx
ReactDOM.render(
KFP_FLAGS.DEPLOYMENT === Deployments.KUBEFLOW ? (
<NamespaceContextProvider>{app}</NamespaceContextProvider>
) : (
// Uncomment the following for namespace switch during development.
// <NamespaceContext.Provider value='your-namespace'>{app}</NamespaceContext.Provider>
<NamespaceContext.Provider value={undefined}>{app}</NamespaceContext.Provider>
),
document.getElementById('root'),
);
```
and [api server](https://github.com/kubeflow/pipelines/blob/1.2.0/backend/src/apiserver/server/experiment_server.go#L136)
```go
if common.IsMultiUserMode() {
if refKey == nil || refKey.Type != common.Namespace {
return nil, util.NewInvalidInputError("Invalid resource references for experiment. ListExperiment requires filtering by namespace.")
}
namespace := refKey.ID
if len(namespace) == 0 {
return nil, util.NewInvalidInputError("Invalid resource references for experiment. Namespace is empty.")
}
resourceAttributes := &authorizationv1.ResourceAttributes{
Namespace: namespace,
Verb: common.RbacResourceVerbList,
}
err = s.canAccessExperiment(ctx, "", resourceAttributes)
if err != nil {
return nil, util.Wrap(err, "Failed to authorize with API resource references")
}
} else {
if refKey != nil && refKey.Type == common.Namespace && len(refKey.ID) > 0 {
return nil, util.NewInvalidInputError("In single-user mode, ListExperiment cannot filter by namespace.")
}
// In single user mode, apply filter with empty namespace for backward compatibile.
filterContext = &common.FilterContext{
ReferenceKey: &common.ReferenceKey{Type: common.Namespace, ID: ""},
}
}
```
codebases, it seems `KFP_FLAGS.DEPLOYMENT` correlates tightly with `MULTIUSER` support, i.e. only the `KUBEFLOW` deployment supports `MULTIUSER` mode, and vice versa: in single-user mode, the `KUBEFLOW` one should not be used.
But what is the wider scope of this `KFP_FLAGS.DEPLOYMENT`?
How is it supposed to be used in single-user mode?
Or where can I read about this?
Thanks | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5117/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5114 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5114/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5114/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5114/events | https://github.com/kubeflow/pipelines/issues/5114 | 804,127,657 | MDU6SXNzdWU4MDQxMjc2NTc= | 5,114 | Active/Archive management on Runs for Archived Experiment | {
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930476737,
"node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted",
"name": "help wanted",
"color": "db1203",
"default": true,
"description": "The community is welcome to contribute."
},
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1682717377,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzc3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/discussion",
"name": "kind/discussion",
"color": "ecfc15",
"default": false,
"description": ""
},
{
"id": 2152751095,
"node_id": "MDU6TGFiZWwyMTUyNzUxMDk1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen",
"name": "lifecycle/frozen",
"color": "ededed",
"default": false,
"description": null
},
{
"id": 2186355346,
"node_id": "MDU6TGFiZWwyMTg2MzU1MzQ2",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/good%20first%20issue",
"name": "good first issue",
"color": "fef2c0",
"default": true,
"description": ""
}
] | open | false | null | [] | null | [
"I think the question is: \"Should we allow **Active** Runs in **Archived** Experiments\"?\r\n\r\nIf the answer was no, I can think of the following implications:\r\n\r\n- Archiving an Experiment would imply archiving of ALL the Runs belonging to the Experiment. We could prompt users of course, saying that the Experiment archival will have this effect.\r\n- Restoring an Experiment will restore ALL Runs belonging to the Experiment.\r\n\r\nIt is natural to me thinking about Experiments as direct parents (containers) of Runs, thus if I see an archived Experiment I would expect all of its runs to be Archived as well.\r\n\r\nIt's clear that KFP follows a more loosely coupled approach. @Bobgy Was this a specific decision, taken at some point in time, grounded on some specific rationale? Maybe there is some old issue where this had been discussed.",
"There were no formal discussion when it was implemented, and we sort of took the shortest path. Feel free to argue about whatever option makes the most sense to you.",
"One weird case would be:\n1. Archive some runs, but not all\n2. Archive the experiment, so all runs are archived\n3. Restore the experiment, should all runs get brought back? What if archiving the experiment was a mistake?\n\nJust to mean this is sth we can discuss, we may not need to support this corner case so well.",
"I agree with @StefanoFioravanzo 's proposal and when experiment is archived, runs should fail to unarchiv",
"@Bobgy This is a fair point!\r\n\r\n> One weird case would be:\r\n> \r\n> 1. Archive some runs, but not all\r\n> 2. Archive the experiment, so all runs are archived\r\n> 3. Restore the experiment, should all runs get brought back? What if archiving the experiment was a mistake?\r\n> \r\n> Just to mean this is sth we can discuss, we may not need to support this corner case so well.\r\n\r\nI think the following approach addresses it:\r\n\r\n- Archive Experiment: All Runs belonging to the Experiment get archived as well (upon user confirmation)\r\n- UnArchive Experiment: All Runs REMAIN archived; if the user wants to unarchive them all then they can go to the Experiment Details page, Archived Runs tab, and perform a bulk restore",
"In this way we enforce this first implication\r\n\r\n**Archived Experiment -> Archived Runs**\r\n\r\nBut not the opposite, and it's completely the users' choice whether to restore Runs, all of them or just in part. And if they want to restore an Experiment and keep it empty of Active Runs, to start creating new ones, they can do so.",
"SGTM",
"> if the user wants to unarchive them all then they can go to the Experiment Details page, Archived Runs tab, and perform a bulk restore\n\nIn fact, the argument seems to apply reversely too, if all runs are restored by default, users can bulk archive them again.\n\nI think we should think about which operation is more common as user intent? I feel like restoring all runs when restoring the experiment would be more Intuitive for me. WDYT?",
"I think we should just make this configurable. When you restore an experiment we can ask: \"Do you want to restore all archived Runs as well?\" -> \"yes\"/\"no\". So we definitely cover all potential use cases. WDYT?",
"Agreed, that's a good point!",
"I agree with the statement that **Archived Experiment** should not contain **Active Runs**.\r\n\r\nIn terms of state management of archiving/restoring an Experiment, I think the original intent of **Archived Runs** is to avoid accidental deletion of active runs, because active runs cannot be deleted, it has be archived first. And the concept of **Archived Experiment** is to group a list of runs which are ready to be deleted. See https://github.com/kubeflow/pipelines/issues/3283#issuecomment-626094965.\r\n\r\nTherefore, I agree with the interactive approach to ask users whether to restore Runs while restoring Experiment. \r\n\r\nTo summarize the list of changes required as a result of this decision:\r\n\r\n1. frontend: Hide **Active runs** list in `ExperimentDetail.tsx` when Experiment is **Archived**. Because no run should be active if Experiment is archived.\r\n2. frontend: When restoring an experiment, popup should shows 3 options for user to choose: `Cancel`, `Restore Experiment` and `Restore Experiment and All Runs`, and highlight `Restore Experiment` because that is the default behavior. \r\n3. backend: Check whether a run is under archived experiment when [Unarchive API](https://www.kubeflow.org/docs/pipelines/reference/api/kubeflow-pipeline-api-spec/#operation--apis-v1beta1-runs--id-:unarchive-post) is called. If so, fail with error 400 (FAILED_PRECONDITION).\r\n\r\nFeel free to comment if I am missing anything or need adjustment. Thank you!",
"> I agree with the statement that **Archived Experiment** should not contain **Active Runs**.\n> \n> In terms of state management of archiving/restoring an Experiment, I think the original intent of **Archived Runs** is to avoid accidental deletion of active runs, because active runs cannot be deleted, it has be archived first. And the concept of **Archived Experiment** is to group a list of runs which are ready to be deleted. See https://github.com/kubeflow/pipelines/issues/3283#issuecomment-626094965.\n> \n> Therefore, I agree with the interactive approach to ask users whether to restore Runs while restoring Experiment. \n> \n> To summarize the list of changes required as a result of this decision:\n> \n> 1. frontend: Hide **Active runs** list in `ExperimentDetail.tsx` when Experiment is **Archived**. Because no run should be active if Experiment is archived.\n\nPersonal opinion, UI should be stable, when a button doesn't apply in a context, we can disable it and add a hover tooltip to tell why. Removing it might leave people confused -- where is the tab?\n\n> 2. frontend: When restoring an experiment, popup should shows 3 options for user to choose: `Cancel`, `Restore Experiment` and `Restore Experiment and All Runs`, and highlight `Restore Experiment` because that is the default behavior. \n\nNote, we need to implent this feature in backend too.\n\n> 3. backend: Check whether a run is under archived experiment when (Unarchive API)[https://www.kubeflow.org/docs/pipelines/reference/api/kubeflow-pipeline-api-spec/#operation--apis-v1beta1-runs--id-:unarchive-post] is called. If so, fail with error 400 (FAILED_PRECONDITION).\n> \n> Feel free to comment if I am missing anything or need adjustment. Thank you!\n\n\nThanks! Per priority, sounds like 3 should be done first, 1 and 2 are good to haves. Users wouldn't be very distracted with current implementation too.\n",
"> Personal opinion, UI should be stable, when a button doesn't apply in a context, we can disable it and add a hover tooltip to tell why. Removing it might leave people confused -- where is the tab?\r\n\r\nYes! I completely agree on this!",
"I think\r\n> backend: Check whether a run is under archived experiment when (Unarchive API)[https://www.kubeflow.org/docs/pipelines/reference/api/kubeflow-pipeline-api-spec/#operation--apis-v1beta1-runs--id-:unarchive-post] is called. If so, fail with error 400 (FAILED_PRECONDITION).\r\n\r\ncan be a good first issue.\r\nWelcome contribution!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, @Bobgy and @StefanoFioravanzo can I work on number: 3? \r\n> backend: Check whether a run is under archived experiment when Unarchive API is called. If so, fail with error 400 (FAILED_PRECONDITION).\r\n\r\nI will create a separate issue about this. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"/lifecycle frozen"
] | 2021-02-09T02:10:45 | 2022-04-21T00:23:16 | null | COLLABORATOR | null |
### What happened:
In PR #5040, which allows users to see the Active/Archived run lists of an experiment regardless of whether that experiment is active or archived, a discussion arose on whether a user can restore a `Run` from an `Archived Experiment`. See https://github.com/kubeflow/pipelines/pull/5040#issuecomment-773952554 for the initial question, and https://github.com/kubeflow/pipelines/pull/5040#issuecomment-774061658 for the argument. Currently this behavior is allowed in the latest code.
### Decision Making
The question: can a user restore a Run from an Archived Experiment? If so, what is the use case? If not, why is it discouraged?
### Follow-up items
* If this behavior is allowed, should we update the UI to call out the state of the Experiment in the ExperimentDetail.tsx file?
* If this behavior is not allowed, what is the error message we should show to users?
/kind discussion
/area frontend
<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->
<!--
// /area frontend
// /area backend
// /area sdk
// /area testing
// /area engprod
-->
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5114/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5111 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5111/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5111/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5111/events | https://github.com/kubeflow/pipelines/issues/5111 | 803,624,425 | MDU6SXNzdWU4MDM2MjQ0MjU= | 5,111 | Boolean parameters cannot be passed to components defined with the component.yaml specifications | {
"login": "Intellicode",
"id": 6794,
"node_id": "MDQ6VXNlcjY3OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6794?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Intellicode",
"html_url": "https://github.com/Intellicode",
"followers_url": "https://api.github.com/users/Intellicode/followers",
"following_url": "https://api.github.com/users/Intellicode/following{/other_user}",
"gists_url": "https://api.github.com/users/Intellicode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Intellicode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Intellicode/subscriptions",
"organizations_url": "https://api.github.com/users/Intellicode/orgs",
"repos_url": "https://api.github.com/users/Intellicode/repos",
"events_url": "https://api.github.com/users/Intellicode/events{/privacy}",
"received_events_url": "https://api.github.com/users/Intellicode/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | null | [] | null | [
"I fixed this problem by using type \"Boolean\" instead of \"Bool\" in the component definition, although that goes directly against https://www.kubeflow.org/docs/pipelines/reference/component-spec/",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n",
"this is still a problem in kfp 1.8.14, and the documentation has still not been updated. Input type \"Bool\" is conspicuously the only input type that isn't fully spelled out, contrary to \"Integer\", \"String\", and \"Float\".\r\n\r\nPipelines run if input is \"Boolean\", but fail due to the `Argument type \"Boolean\" is incompatible with the input type \"Bool\"` error if input is \"Bool\", as the docs specify.\r\n\r\nCan we at least update the docs please?"
] | 2021-02-08T14:38:19 | 2022-12-16T20:07:59 | 2022-04-28T18:00:31 | CONTRIBUTOR | null | ### What steps did you take:
Boolean parameters cannot be passed to components defined with the component.yaml specifications:
Example pipeline:
```python
# test_case.py
from kfp import dsl, components
@dsl.pipeline(name="Pipeline", description="Pipeline")
def pipeline(
param: bool = False
) -> None:
some_op = components.load_component_from_text(
"""
name: Pipeline
inputs:
- name: param
type: Bool
implementation:
container:
image: some/container
args: [
/opt/conda/envs/py37/bin/python, script.py,
--job-name, {inputValue: param},
]
"""
)
some_op(param)
```
```bash
dsl-compile --py test_case.py --output test_case.yaml
```
### What happened:
The dsl-compile step throws an error:
```
kfp.dsl.types.InconsistentTypeException: Incompatible argument passed to the input "param" of component "Pipeline": Argument type "Boolean" is incompatible with the input type "Bool"
```
### What did you expect to happen:
The pipeline should compile successfully.
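For reference, a minimal sketch of the workaround reported in the comments: declaring the input as `Boolean` (the type name the SDK infers for a Python `bool`) instead of the documented `Bool` lets the same pipeline compile. Everything here mirrors the snippet above; only the type name and the hypothetical file name change.
```python
# test_case_workaround.py -- hypothetical file name
from kfp import dsl, components

@dsl.pipeline(name="Pipeline", description="Pipeline")
def pipeline(
    param: bool = False
) -> None:
    some_op = components.load_component_from_text(
        """
name: Pipeline
inputs:
- name: param
  type: Boolean  # workaround: "Boolean" instead of the documented "Bool"
implementation:
  container:
    image: some/container
    args: [
      /opt/conda/envs/py37/bin/python, script.py,
      --job-name, {inputValue: param},
    ]
"""
    )
    some_op(param)
```
With this change, `dsl-compile --py test_case_workaround.py --output test_case.yaml` succeeds, although, as noted in the comments, it contradicts the component-spec documentation.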
### Environment:
KFP version: N/A
KFP SDK version:
```
kfp 1.4.0
kfp-pipeline-spec 0.1.2
kfp-server-api 1.3.0
```
/kind bug
/area sdk
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5111/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5109 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5109/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5109/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5109/events | https://github.com/kubeflow/pipelines/issues/5109 | 802,745,682 | MDU6SXNzdWU4MDI3NDU2ODI= | 5,109 | How to specify the env values in ComponentSpec to build reusable components | {
"login": "ShilpaGopal",
"id": 13718648,
"node_id": "MDQ6VXNlcjEzNzE4NjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/13718648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShilpaGopal",
"html_url": "https://github.com/ShilpaGopal",
"followers_url": "https://api.github.com/users/ShilpaGopal/followers",
"following_url": "https://api.github.com/users/ShilpaGopal/following{/other_user}",
"gists_url": "https://api.github.com/users/ShilpaGopal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShilpaGopal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShilpaGopal/subscriptions",
"organizations_url": "https://api.github.com/users/ShilpaGopal/orgs",
"repos_url": "https://api.github.com/users/ShilpaGopal/repos",
"events_url": "https://api.github.com/users/ShilpaGopal/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShilpaGopal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930619513,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTM=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/priority/p1",
"name": "priority/p1",
"color": "cb03cc",
"default": false,
"description": ""
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | null | [] | null | [
"/cc @chensun @Ark-kun \r\n\r\nHi @ShilpaGopal, mounting secrets are probably never reusable, because they depend on your cluster setup. Therefore, you can only override env vars after initiating a component as an instance. See https://github.com/kubeflow/pipelines/blob/cadcac08bd6e2712ce62d7eb59ff0b3f2ee1bbe2/samples/core/visualization/tensorboard_minio.py#L168-L181",
"TODO: update the mentioned doc that env is not supported",
"@Bobgy Thanks for your input. Yes, I faced the issue were I had to maintain multiple reusable components per cluster in different environments. Overriding vars during initialization sounds like a good idea. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n",
"@Bobgy \r\nExcuse me for asking a question in a closed issue.\r\n\r\n> TODO: update the mentioned doc that env is not supported\r\n\r\nTo follow the comments above, is env no longer supported?\r\n\r\nI'm confused because there is a statement that env can be configured in [implementation](https://github.com/kubeflow/website/blob/master/content/en/docs/components/pipelines/v1/reference/component-spec.md#implementation ) "
] | 2021-02-06T17:14:31 | 2022-12-08T13:39:05 | 2022-03-03T04:05:34 | NONE | null | ### What steps did you take:
I am building a reusable component, say alert-manager, which will be used across every pipeline. The Docker image used to build the component needs some environment variables to be set. As per the documentation (https://www.kubeflow.org/docs/pipelines/reference/component-spec/), env can be set as a map. So I have added env params as shown below:
```
name: dsw-alert
description: sends events to prometheus alert manager
inputs:
- {name: alert_name, type: String, description: 'alert name'}
- {name: severity, type: String, description: 'severity of the notification'}
implementation:
container:
image: dsw-alert-manager:1.0.0-10
args: [
"src/main.py",
"--alert_name", {inputValue: alert_name},
"--severity", {inputValue: severity}]
env:
- name: ALERT_MANAGER_HOST
value: 'http://alert-manager.fms-svc.svc.cluster.local'
```
### What happened:
When I try to load the component using load_component_from_file("data-loader-qa.yaml"), dsl-compile of the pipeline fails with the error below (without the env section in the reusable alert-manager component, I am able to compile the pipeline).
```
Error: ContainerSpec.from_dict(struct=OrderedDict([('image', 'dsw-alert-manager:1.0.0-10'), ('args', ['src/main.py', '--alert_name', OrderedDict([
('inputValue', 'alert_name')]), '--severity', OrderedDict([('inputValue', 'severity')])]), ('env', [OrderedDict([('name', 'ALERT_MANAGER_HOST'), ('value', 'http://alert-manager.fms-svc.sv
c.cluster.local')])])])) failed with exception:
Error: Structure "[OrderedDict([('name', 'ALERT_MANAGER_HOST'), ('value', 'http://alert-manager.fms-svc.svc.cluster.local')])]" is incompatible with type "typing.Mapping[str, str]" - it d
oes not have dict type.
Error: Structure "[OrderedDict([('name', 'ALERT_MANAGER_HOST'), ('value', 'http://alert-manager.fms-svc.svc.cluster.local')])]" is not None.
Error: Structure "[OrderedDict([('name', 'ALERT_MANAGER_HOST'), ('value', 'http://alert-manager.fms-svc.svc.cluster.local')])]" is incompatible with type "typing.Union[typing.Mapping[str,
str], NoneType]" - none of the types in Union are compatible.
Error: GraphImplementation.from_dict(struct=OrderedDict([('container', OrderedDict([('image', 'harbor-registry-mndc.uidai.gov.in/fms-qa/dsw-alert-manager:1.0.0-10'), ('args', ['src/main.p
y', '--alert_name', OrderedDict([('inputValue', 'alert_name')]), '--severity', OrderedDict([('inputValue', 'severity')])]), ('env', [OrderedDict([('name', 'ALERT_MANAGER_HOST'), ('value',
'http://alert-manager.fms-svc.svc.cluster.local')])])]))])) failed with exception:
__init__() got an unexpected keyword argument 'container'
Error: Structure "OrderedDict([('container', OrderedDict([('image', 'dsw-alert-manager:1.0.0-10'), ('args', ['src/main.py', '--alert_name', Ordere
dDict([('inputValue', 'alert_name')]), '--severity', OrderedDict([('inputValue', 'severity')])]), ('env', [OrderedDict([('name', 'ALERT_MANAGER_HOST'), ('value', 'http://alert-manager.fms
-svc.svc.cluster.local')])])]))])" is incompatible with type "typing.Union[kfp.components._structures.ContainerImplementation, kfp.components._structures.GraphImplementation, NoneType]" -
none of the types in Union are compatible.
```
### What did you expect to happen:
How do I specify environment variables in a reusable component spec? Any input on this is appreciated.
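As suggested in the comments, a minimal sketch of the alternative: drop the `env:` section from the reusable component.yaml and set the cluster-specific variable on the task instance instead. The file name, pipeline wrapper, and argument values here are illustrative only.
```python
from kfp import components, dsl
from kubernetes import client as k8s_client  # pip install kubernetes

# hypothetical file: the dsw-alert component yaml without the env: section
dsw_alert_op = components.load_component_from_file("dsw-alert.yaml")

@dsl.pipeline(name="alert-demo")
def pipeline():
    task = dsw_alert_op(alert_name="disk-full", severity="critical")
    # Inject the per-cluster value at pipeline-definition time instead of
    # baking it into the reusable component spec.
    task.container.add_env_variable(
        k8s_client.V1EnvVar(
            name="ALERT_MANAGER_HOST",
            value="http://alert-manager.fms-svc.svc.cluster.local",
        )
    )
```
This keeps a single component.yaml reusable across clusters, with each pipeline supplying its own environment-specific values.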
### Environment:
<!-- Please fill in those that seem relevant. -->
How did you deploy Kubeflow Pipelines (KFP)?
<!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). -->
Custom manifests deployment on an on-prem Kubernetes cluster with multi-user support.
KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> 1.2.0
KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp -->1.1.1
/kind bug
<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->
<!--
// /area frontend
// /area backend
// /area sdk
// /area testing
// /area engprod
-->
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5109/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5106 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5106/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5106/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5106/events | https://github.com/kubeflow/pipelines/issues/5106 | 802,335,097 | MDU6SXNzdWU4MDIzMzUwOTc= | 5,106 | Collapsing For-loop components in the UI for large number of components | {
"login": "bakhtiary",
"id": 5190152,
"node_id": "MDQ6VXNlcjUxOTAxNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5190152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bakhtiary",
"html_url": "https://github.com/bakhtiary",
"followers_url": "https://api.github.com/users/bakhtiary/followers",
"following_url": "https://api.github.com/users/bakhtiary/following{/other_user}",
"gists_url": "https://api.github.com/users/bakhtiary/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bakhtiary/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bakhtiary/subscriptions",
"organizations_url": "https://api.github.com/users/bakhtiary/orgs",
"repos_url": "https://api.github.com/users/bakhtiary/repos",
"events_url": "https://api.github.com/users/bakhtiary/events{/privacy}",
"received_events_url": "https://api.github.com/users/bakhtiary/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | null | [] | null | [
"This is sth we are thinking about for KFP v2 UI",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-02-05T17:14:58 | 2022-04-18T17:28:00 | 2022-04-18T17:28:00 | NONE | null | ### What steps did you take:
I ran a pipeline with a large fanout
### What happened:
![image](https://user-images.githubusercontent.com/5190152/107065442-11437900-67dd-11eb-8d47-19131f64f738.png)
### What did you expect to happen:
It would be better if the items generated from the same statement stayed collapsed until clicked, or if some other UI mechanism provided a clear view of what is going on.
### Environment:
kubeflow pipelines 1.3 stand alone installation
How did you deploy Kubeflow Pipelines (KFP)?
Stand alone on AWS
1.3
### Anything else you would like to add:
I understand this is a feature request. Is there a way for our company to put a bounty on this?
/kind feature
/area frontend
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5106/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5106/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5103 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5103/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5103/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5103/events | https://github.com/kubeflow/pipelines/issues/5103 | 801,725,126 | MDU6SXNzdWU4MDE3MjUxMjY= | 5,103 | [Bug] execution_caches table growing, no cached items ever deleted. | {
"login": "alexlatchford",
"id": 628146,
"node_id": "MDQ6VXNlcjYyODE0Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/628146?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexlatchford",
"html_url": "https://github.com/alexlatchford",
"followers_url": "https://api.github.com/users/alexlatchford/followers",
"following_url": "https://api.github.com/users/alexlatchford/following{/other_user}",
"gists_url": "https://api.github.com/users/alexlatchford/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexlatchford/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexlatchford/subscriptions",
"organizations_url": "https://api.github.com/users/alexlatchford/orgs",
"repos_url": "https://api.github.com/users/alexlatchford/repos",
"events_url": "https://api.github.com/users/alexlatchford/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexlatchford/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | null | [] | null | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-02-04T23:56:33 | 2022-04-28T18:00:30 | 2022-04-28T18:00:30 | CONTRIBUTOR | null | ### What steps did you take:
We are users of the KFP caching feature and just noticed the `execution_caches` table is up to 450k rows 😅
### What happened:
The cache still works but is likely using more space than desired.
### What did you expect to happen:
The cache somehow compacts itself, likely via expiry.
### Environment:
How did you deploy Kubeflow Pipelines (KFP)? Via the Kubeflow v1.1 manifests.
KFP version: v1.0.0
KFP SDK version: v1.0.4
### Anything else you would like to add:
Looks like there is a `DeleteExecutionCache` method (see [here](https://github.com/kubeflow/pipelines/blob/master/backend/src/cache/storage/execution_cache_store.go#L137)), but I don't see an interface to it, and it doesn't appear to be used anywhere.
The original request was from a customer of ours about whether they could manually delete a cached item (perhaps via the UI or CLI), so I did a little digging and found the cache itself wasn't auto-expiring.
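Until auto-expiry exists, a rough sketch of manual pruning with direct MySQL access. The database name (`cachedb`), the host, and the `StartedAtInSec` column are assumptions taken from the cache server source (`execution_cache_store.go`); verify them against your schema before running anything.
```python
import time
import mysql.connector  # pip install mysql-connector-python

MAX_AGE_SECONDS = 30 * 24 * 3600  # keep only the last 30 days of cache entries

conn = mysql.connector.connect(
    host="mysql.kubeflow.svc.cluster.local",  # placeholder in-cluster address
    user="root",
    password="",
    database="cachedb",  # assumed cache DB name
)
cur = conn.cursor()
cutoff = int(time.time()) - MAX_AGE_SECONDS
# execution_caches is the table named in this issue; StartedAtInSec is assumed
cur.execute("DELETE FROM execution_caches WHERE StartedAtInSec < %s", (cutoff,))
conn.commit()
print(f"deleted {cur.rowcount} stale cache entries")
conn.close()
```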
/kind bug
/area backend
cc @Ark-kun (at the request of @Bobgy from [this Slack thread](https://kubeflow.slack.com/archives/CE10KS9M4/p1612317365063400)). | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5103/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5101 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5101/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5101/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5101/events | https://github.com/kubeflow/pipelines/issues/5101 | 801,412,458 | MDU6SXNzdWU4MDE0MTI0NTg= | 5,101 | Installing KFP under a specific namespace causes controller-manager to raise many errors | {
"login": "amitripshtos",
"id": 10770124,
"node_id": "MDQ6VXNlcjEwNzcwMTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/10770124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amitripshtos",
"html_url": "https://github.com/amitripshtos",
"followers_url": "https://api.github.com/users/amitripshtos/followers",
"following_url": "https://api.github.com/users/amitripshtos/following{/other_user}",
"gists_url": "https://api.github.com/users/amitripshtos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amitripshtos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amitripshtos/subscriptions",
"organizations_url": "https://api.github.com/users/amitripshtos/orgs",
"repos_url": "https://api.github.com/users/amitripshtos/repos",
"events_url": "https://api.github.com/users/amitripshtos/events{/privacy}",
"received_events_url": "https://api.github.com/users/amitripshtos/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | open | false | {
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@amitripshtos sorry for the late reply, can you still see this problem?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 2021-02-04T15:54:35 | 2022-03-03T02:05:29 | null | NONE | null | ### What steps did you take:
Installed KFP standalone in the "dev" namespace instead of "kubeflow".
### What happened:
Everything works, but the controller-manager is spamming the following log:
```
github.com/kubernetes-sigs/application/vendor/sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1beta1.Application: applications.app.k8s.io is forbidden: User "system:serviceaccount:development:application" cannot list resource "applications" in API group "app.k8s.io" in the namespace "kubeflow"
```
### What did you expect to happen:
The code in the controller-manager should get the namespace from the NAMESPACE environment variable and not use the default "kubeflow" namespace.
### Environment:
GKE
How did you deploy Kubeflow Pipelines (KFP)?
Standalone
KFP version: 1.3.0
/kind bug
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5101/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5098 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5098/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5098/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5098/events | https://github.com/kubeflow/pipelines/issues/5098 | 801,095,043 | MDU6SXNzdWU4MDEwOTUwNDM= | 5,098 | TFX Trainer ModelRun output can't be visualized with TensorBoard | {
"login": "denis-angilella",
"id": 58222783,
"node_id": "MDQ6VXNlcjU4MjIyNzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/58222783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/denis-angilella",
"html_url": "https://github.com/denis-angilella",
"followers_url": "https://api.github.com/users/denis-angilella/followers",
"following_url": "https://api.github.com/users/denis-angilella/following{/other_user}",
"gists_url": "https://api.github.com/users/denis-angilella/gists{/gist_id}",
"starred_url": "https://api.github.com/users/denis-angilella/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/denis-angilella/subscriptions",
"organizations_url": "https://api.github.com/users/denis-angilella/orgs",
"repos_url": "https://api.github.com/users/denis-angilella/repos",
"events_url": "https://api.github.com/users/denis-angilella/events{/privacy}",
"received_events_url": "https://api.github.com/users/denis-angilella/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"Thank you for reporting. I think that this is related to https://github.com/tensorflow/tfx/commit/18055491d38604934a25d49c53d456b96da04c47 . This should be fixed with the upcoming TFX version(0.28.0) (currently in RC)."
] | 2021-02-04T09:25:52 | 2021-03-12T01:14:22 | 2021-03-12T01:14:22 | NONE | null | ### What steps did you take:
* Run a TFX pipeline with Trainer component until successful completion
* Go to Run Output
* The Trainer component has 2 outputs, model and model_run:
```
model
Type: Model
Artifact: model
Properties:
uri: gs://<bucket>/pipelines/<pipeline_name>/Trainer/model/25
id: 31
span: None
type_id: 20
type_name: Model
state: published
split_names: None
producer_component: Trainer
model_run
Type: ModelRun
Artifact: model_run
Properties:
uri: gs://<bucket>/pipelines/<pipeline_name>/Trainer/model_run/25
id: 32
span: None
type_id: 35
type_name: ModelRun
state: published
split_names: None
producer_component: Trainer
```
* In the Trainer component section, create a TensorBoard, then open it
### What happened:
TensorBoard shows INACTIVE and the message `No dashboards are active for the current data set.`.
Switching to SCALARS, under Runs there is a single run:
```
gs://<bucket>/pipelines/<pipeline_name>/Trainer/model/25
```
The gs://.../model/ dir doesn't contain TensorBoard events.
The gs://.../model_run/ dir, which contains the events for TensorBoard, should be used as the `logdir` instead.
```
gs://<bucket>/pipelines/<pipeline_name>/Trainer/model_run/25/serving_model_dir/events.out.tfevents.1612255967.model-train-d6wxv-2443271500
gs://<bucket>/pipelines/<pipeline_name>/Trainer/model_run/25/serving_model_dir/eval_model-eval/events.out.tfevents.1612256425.model-train-d6wxv-2443271500
```
The raw Trainer `mlpipeline-ui-metadata` output (names replaced):
```
{"outputs": [{"storage": "inline", "source": "# Execution properties:\n**custom\\_config**: null\n\n**eval\\_args**: {\n \"num\\_steps\": 500\n}\n\n**module\\_file**: None\n\n**run\\_fn**: None\n\n**train\\_args**: {\n \"num\\_steps\": 1000\n}\n\n**trainer\\_fn**: model.train.train.trainer\\_fn\n\n**kfp\\_pod\\_name**: <pipeline_name>-d6wxv-2443271500\n\n# Inputs:\n## examples\n\n**Type**: Examples\n\n**Artifact: transformed\\_examples**\n\n**Properties**:\n\n**uri**: gs://<bucket>/pipelines/<pipeline_name>/Transform/transformed\\_examples/23\n\n**id**: 25\n\n**span**: 0\n\n**type_id**: 10\n\n**type_name**: Examples\n\n**state**: published\n\n**split_names**: [\"train\", \"eval\"]\n\n**producer_component**: Transform\n\n## schema\n\n**Type**: Schema\n\n**Artifact: schema**\n\n**Properties**:\n\n**uri**: gs://<bucket>/pipelines/<pipeline_name>/SchemaGen/schema/19\n\n**id**: 20\n\n**span**: None\n\n**type_id**: 14\n\n**type_name**: Schema\n\n**state**: published\n\n**split_names**: None\n\n**producer_component**: SchemaGen\n\n## transform\\_graph\n\n**Type**: TransformGraph\n\n**Artifact: transform\\_graph**\n\n**Properties**:\n\n**uri**: gs://<bucket>/pipelines/<pipeline_name>/Transform/transform\\_graph/23\n\n**id**: 24\n\n**span**: None\n\n**type_id**: 18\n\n**type_name**: TransformGraph\n\n**state**: published\n\n**split_names**: None\n\n**producer_component**: Transform\n\n\n\n# Outputs:\n## model\n\n**Type**: Model\n\n**Artifact: model**\n\n**Properties**:\n\n**uri**: gs://<bucket>/pipelines/<pipeline_name>/Trainer/model/25\n\n**id**: 31\n\n**span**: None\n\n**type_id**: 20\n\n**type_name**: Model\n\n**state**: published\n\n**split_names**: None\n\n**producer_component**: Trainer\n\n## model\\_run\n\n**Type**: ModelRun\n\n**Artifact: model\\_run**\n\n**Properties**:\n\n**uri**: gs://<bucket>/pipelines/<pipeline_name>/Trainer/model\\_run/25\n\n**id**: 32\n\n**span**: None\n\n**type_id**: 35\n\n**type_name**: ModelRun\n\n**state**: published\n\n**split_names**: None\n\n**producer_component**: Trainer\n\n", "type": "markdown"}, {"type": "tensorboard", "source": "gs://<bucket>/pipelines/<pipeline_name>/Trainer/model/25"}]}
```
### What did you expect to happen:
TensorBoard should show events from the model_run log dir.
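The fix ultimately belongs in TFX (see the commit referenced in the comment above), but for illustration, the `tensorboard` entry in `mlpipeline-ui-metadata` should carry the model_run URI as its `source`. A minimal sketch of what a component would write, keeping the placeholder path from this report:
```python
import json

# The <bucket>/<pipeline_name> placeholders are from this report.
metadata = {
    "outputs": [
        {
            "type": "tensorboard",
            "source": "gs://<bucket>/pipelines/<pipeline_name>/Trainer/model_run/25",
        }
    ]
}

# KFP's UI reads this well-known file to configure the TensorBoard viewer.
with open("/mlpipeline-ui-metadata.json", "w") as f:
    json.dump(metadata, f)
```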
### Environment:
How did you deploy Kubeflow Pipelines (KFP)?
Google Cloud AI Platform Pipelines
KFP version: 1.0.4
KFP SDK version: 1.3.0
### Anything else you would like to add:
TensorBoard 2.0.0
/kind bug
/area frontend
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5098/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5095 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5095/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5095/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5095/events | https://github.com/kubeflow/pipelines/issues/5095 | 800,987,482 | MDU6SXNzdWU4MDA5ODc0ODI= | 5,095 | Add kube-api-qps/kube-api-burst for pipeline components | {
"login": "peng09",
"id": 9096299,
"node_id": "MDQ6VXNlcjkwOTYyOTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9096299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peng09",
"html_url": "https://github.com/peng09",
"followers_url": "https://api.github.com/users/peng09/followers",
"following_url": "https://api.github.com/users/peng09/following{/other_user}",
"gists_url": "https://api.github.com/users/peng09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peng09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peng09/subscriptions",
"organizations_url": "https://api.github.com/users/peng09/orgs",
"repos_url": "https://api.github.com/users/peng09/repos",
"events_url": "https://api.github.com/users/peng09/events{/privacy}",
"received_events_url": "https://api.github.com/users/peng09/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930476737,
"node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted",
"name": "help wanted",
"color": "db1203",
"default": true,
"description": "The community is welcome to contribute."
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2189136330,
"node_id": "MDU6TGFiZWwyMTg5MTM2MzMw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/perf",
"name": "area/perf",
"color": "7dc5f2",
"default": false,
"description": ""
}
] | closed | false | {
"login": "xxxxiehf",
"id": 25131016,
"node_id": "MDQ6VXNlcjI1MTMxMDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/25131016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xxxxiehf",
"html_url": "https://github.com/xxxxiehf",
"followers_url": "https://api.github.com/users/xxxxiehf/followers",
"following_url": "https://api.github.com/users/xxxxiehf/following{/other_user}",
"gists_url": "https://api.github.com/users/xxxxiehf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xxxxiehf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xxxxiehf/subscriptions",
"organizations_url": "https://api.github.com/users/xxxxiehf/orgs",
"repos_url": "https://api.github.com/users/xxxxiehf/repos",
"events_url": "https://api.github.com/users/xxxxiehf/events{/privacy}",
"received_events_url": "https://api.github.com/users/xxxxiehf/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "xxxxiehf",
"id": 25131016,
"node_id": "MDQ6VXNlcjI1MTMxMDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/25131016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xxxxiehf",
"html_url": "https://github.com/xxxxiehf",
"followers_url": "https://api.github.com/users/xxxxiehf/followers",
"following_url": "https://api.github.com/users/xxxxiehf/following{/other_user}",
"gists_url": "https://api.github.com/users/xxxxiehf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xxxxiehf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xxxxiehf/subscriptions",
"organizations_url": "https://api.github.com/users/xxxxiehf/orgs",
"repos_url": "https://api.github.com/users/xxxxiehf/repos",
"events_url": "https://api.github.com/users/xxxxiehf/events{/privacy}",
"received_events_url": "https://api.github.com/users/xxxxiehf/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"/assign",
"/area backend",
"/area perf",
"@Jeffwan @XXXXIEHF Welcome contribution on this!"
] | 2021-02-04T06:47:20 | 2021-03-16T18:14:15 | 2021-03-16T18:14:15 | NONE | null | In our internal clusters running the KFP v1.0.0 release, we have over 20K workflow CRs in the Kubernetes cluster.
The pipeline API server and cache server suffer significant performance degradation because of client-go's default QPS/Burst configuration. We can see logs like the one below from client-go (the default QPS/Burst is 5/10 if unspecified):
```
Throttling request took xxxxx, request: xxxxxxxx
```
In such scenarios, I think we need to add two more parameters (like kube-api-qps/kube-api-burst) for all backend components using client-go. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5095/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5094 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5094/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5094/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5094/events | https://github.com/kubeflow/pipelines/issues/5094 | 800,987,448 | MDU6SXNzdWU4MDA5ODc0NDg= | 5,094 | Support `jobID` macro | {
"login": "wenmin-wu",
"id": 9409333,
"node_id": "MDQ6VXNlcjk0MDkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9409333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wenmin-wu",
"html_url": "https://github.com/wenmin-wu",
"followers_url": "https://api.github.com/users/wenmin-wu/followers",
"following_url": "https://api.github.com/users/wenmin-wu/following{/other_user}",
"gists_url": "https://api.github.com/users/wenmin-wu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wenmin-wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wenmin-wu/subscriptions",
"organizations_url": "https://api.github.com/users/wenmin-wu/orgs",
"repos_url": "https://api.github.com/users/wenmin-wu/repos",
"events_url": "https://api.github.com/users/wenmin-wu/events{/privacy}",
"received_events_url": "https://api.github.com/users/wenmin-wu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-02-04T06:47:17 | 2021-03-12T12:16:24 | 2021-03-12T12:16:24 | NONE | null | Here, `jobID` refers to `aba884eb-edcd-4828-aa18-887b4ab4abd2` in the following URL; it's different from `{{workflow.uid}}`:
`http://pipeline.xxx.ingress.stg.xxx.com/#/runs/details/aba884eb-edcd-4828-aa18-887b4ab4abd2`
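For illustration, a hedged sketch of the exit-handler pattern this request would enable. The SDK already ships `kfp.dsl.RUN_ID_PLACEHOLDER`, but it compiles to `{{workflow.uid}}`, which, as this issue points out, may not match the ID in the run URL; the host below is a placeholder, and `notify`/`work` are made-up steps.
```python
from kfp import dsl
from kfp.components import create_component_from_func

def notify(run_url: str):
    print(f"pipeline finished, see {run_url}")

def work():
    print("doing the actual work")

notify_op = create_component_from_func(notify)
work_op = create_component_from_func(work)

@dsl.pipeline(name="exit-handler-demo")
def pipeline():
    # RUN_ID_PLACEHOLDER resolves to {{workflow.uid}} at runtime, which is
    # exactly the value this issue reports as differing from the run URL ID.
    url = "http://pipeline.example.com/#/runs/details/" + dsl.RUN_ID_PLACEHOLDER
    with dsl.ExitHandler(notify_op(run_url=url)):
        work_op()
```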
It's very handy for error tracking; e.g., we could put this macro in an `ExitHandler` to send the owner the run status together with the job URL for faster access, as the sketch above illustrates. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5094/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5089 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5089/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5089/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5089/events | https://github.com/kubeflow/pipelines/issues/5089 | 800,823,448 | MDU6SXNzdWU4MDA4MjM0NDg= | 5,089 | The KFP preloaded XGBoost sample is broken and outdated. | {
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260031624,
"node_id": "MDU6TGFiZWwxMjYwMDMxNjI0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/samples",
"name": "area/samples",
"color": "d2b48c",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"> Util we have the sample working, I propose we remove it from the KFP preloaded pipelines and sample-tests.\r\n\r\nCompletely agree with this!\r\nThe sample is a nice to have, although it's more important get KFP releases healthy and running.",
"While the issue is mitigated by temporarily removing the sample from preloads and tests, we may still want to rewrite the XGBoost Spark-Dataproc sample using the latest XGBoost library. Keep this issue open for tracking."
] | 2021-02-04T00:36:45 | 2021-02-12T01:22:57 | 2021-02-12T01:22:57 | COLLABORATOR | null | **TL;DR: The preload XGBoost sample is currently broken.
Proposing we remove this sample from KFP preload and from sample test until we got a chance to refresh the sample.**
------------
The direct cause was that it used the Dataproc 1.2 image, which is based on Python 2.7, and pip 21.0 dropped support for Python 2.7.
The symptom is that `dataproc_create_cluster` fails on initialization.
![image](https://user-images.githubusercontent.com/2043310/106823995-bbcd6780-6636-11eb-8169-92d7a338e048.png)
and the specific error is mentioned [here](https://github.com/kubeflow/pipelines/issues/5007#issuecomment-769637030).
https://github.com/kubeflow/pipelines/pull/5062 made an attempted fix by upgrading to Dataproc 1.5 image. It fixed the Dataproc cluster creation issue, but we hit [an error](https://github.com/kubeflow/pipelines/issues/5007#issuecomment-770182313) later at the Trainer step.
We were advised that newer versions of Dataproc images likely don't have XGBoost library preinstalled, as there's now [an initialization action that goes through extra steps to install XGBoost libraries](https://github.com/GoogleCloudDataproc/initialization-actions/tree/master/rapids).
Following that route, I tried installing the default XGBoost version using the rapids script, then hit the error as follows:
```
21/02/03 18:34:20 INFO org.spark_project.jetty.util.log: Logging initialized @3037ms
21/02/03 18:34:20 INFO org.spark_project.jetty.server.Server: jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown
21/02/03 18:34:20 INFO org.spark_project.jetty.server.Server: Started @3169ms
21/02/03 18:34:20 INFO org.spark_project.jetty.server.AbstractConnector: Started ServerConnector@4159e81b{HTTP/1.1,[http/1.1]}{0.0.0.0:37489}
21/02/03 18:34:20 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at xgb-bdd8f29b-fb13-4ec2-abcf-38b3699e7ca3-m/10.128.0.101:8032
21/02/03 18:34:21 INFO org.apache.hadoop.yarn.client.AHSProxy: Connecting to Application History server at xgb-bdd8f29b-fb13-4ec2-abcf-38b3699e7ca3-m/10.128.0.101:10200
21/02/03 18:34:21 INFO org.apache.hadoop.conf.Configuration: resource-types.xml not found
21/02/03 18:34:21 INFO org.apache.hadoop.yarn.util.resource.ResourceUtils: Unable to find 'resource-types.xml'.
21/02/03 18:34:21 INFO org.apache.hadoop.yarn.util.resource.ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE
21/02/03 18:34:21 INFO org.apache.hadoop.yarn.util.resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE
21/02/03 18:34:23 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_1612377093662_0003
21/02/03 18:34:30 INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat: Total input files to process : 1
21/02/03 18:34:30 INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat: Total input files to process : 1
21/02/03 18:34:30 INFO org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 0
21/02/03 18:34:36 INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat: Total input files to process : 1
21/02/03 18:34:36 INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat: Total input files to process : 1
21/02/03 18:34:36 INFO org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 0
Exception in thread "main" java.lang.NoSuchMethodError: ml.dmlc.xgboost4j.scala.spark.XGBoost$.trainWithDataFrame$default$5()Lml/dmlc/xgboost4j/scala/ObjectiveTrait;
at ml.dmlc.xgboost4j.scala.example.spark.XGBoostTrainer$.main(XGBoostTrainer.scala:120)
at ml.dmlc.xgboost4j.scala.example.spark.XGBoostTrainer.main(XGBoostTrainer.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
21/02/03 18:34:39 INFO org.spark_project.jetty.server.AbstractConnector: Stopped Spark@4159e81b{HTTP/1.1,[http/1.1]}{0.0.0.0:0}
Job output is complete
```
I then realized that the sample is based on the code from the deprecated component path, which was deleted by https://github.com/kubeflow/pipelines/pull/5045.
Specifically, the missing method from the above error was used here:
https://github.com/kubeflow/pipelines/blob/32ce8d8f90bfc8f89a2a3c347ad906f99ba776a8/components/deprecated/dataproc/train/src/XGBoostTrainer.scala#L121
And `trainWithDataFrame` only [exists in XGBoost 0.72](https://xgboost.readthedocs.io/en/release_0.72/jvm/scaladocs/xgboost4j-spark/index.html#ml.dmlc.xgboost4j.scala.spark.XGBoost$@trainWithDataFrame(trainingData:org.apache.spark.sql.Dataset[_],params:Map[String,Any],round:Int,nWorkers:Int,obj:ml.dmlc.xgboost4j.scala.ObjectiveTrait,eval:ml.dmlc.xgboost4j.scala.EvalTrait,useExternalMemory:Boolean,missing:Float,featureCol:String,labelCol:String):ml.dmlc.xgboost4j.scala.spark.XGBoostModel); it is not seen in any version beyond.
XGBoost 0.72 is too old and not even available from https://repo1.maven.org/maven2/com/nvidia/, which is [used by rapids to download XGBoost](https://github.com/GoogleCloudDataproc/initialization-actions/blob/8980d37d16ae580ad1d3eba7a40da59da52ff175/rapids/rapids.sh#L90).
At this point, I feel we'd rather invest in rewriting the XGBoost sample using the latest XGBoost library than in patching the existing one, if we do think it's worth demoing an XGBoost-on-Dataproc pipeline.
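If the sample does get rewritten, one hedged sketch of a possible direction, using the plain Python `xgboost` API rather than the removed XGBoost4J-Spark 0.72 `trainWithDataFrame`. The data path and the `label` column are placeholders, and reading `gs://` paths with pandas assumes `gcsfs` is installed:
```python
import pandas as pd
import xgboost as xgb

# Placeholder input; pandas needs gcsfs to read gs:// paths directly.
df = pd.read_csv("gs://<bucket>/data/train.csv")
X, y = df.drop(columns=["label"]), df["label"]

dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "binary:logistic", "max_depth": 6, "eta": 0.3}
booster = xgb.train(params, dtrain, num_boost_round=100)

# The JSON model format is supported by modern XGBoost releases.
booster.save_model("model.json")
```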
Until we have the sample working, I propose we remove it from the KFP preloaded pipelines and sample-tests. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5089/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5087 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5087/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5087/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5087/events | https://github.com/kubeflow/pipelines/issues/5087 | 800,515,033 | MDU6SXNzdWU4MDA1MTUwMzM= | 5,087 | kfp.compiler.Compiler.compile() failed with Bazel due to kfp.TYPE_CHECK not found | {
"login": "yzhangswingman",
"id": 29348277,
"node_id": "MDQ6VXNlcjI5MzQ4Mjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/29348277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yzhangswingman",
"html_url": "https://github.com/yzhangswingman",
"followers_url": "https://api.github.com/users/yzhangswingman/followers",
"following_url": "https://api.github.com/users/yzhangswingman/following{/other_user}",
"gists_url": "https://api.github.com/users/yzhangswingman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yzhangswingman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yzhangswingman/subscriptions",
"organizations_url": "https://api.github.com/users/yzhangswingman/orgs",
"repos_url": "https://api.github.com/users/yzhangswingman/repos",
"events_url": "https://api.github.com/users/yzhangswingman/events{/privacy}",
"received_events_url": "https://api.github.com/users/yzhangswingman/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | open | false | {
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
},
{
"login": "numerology",
"id": 9604122,
"node_id": "MDQ6VXNlcjk2MDQxMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9604122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/numerology",
"html_url": "https://github.com/numerology",
"followers_url": "https://api.github.com/users/numerology/followers",
"following_url": "https://api.github.com/users/numerology/following{/other_user}",
"gists_url": "https://api.github.com/users/numerology/gists{/gist_id}",
"starred_url": "https://api.github.com/users/numerology/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/numerology/subscriptions",
"organizations_url": "https://api.github.com/users/numerology/orgs",
"repos_url": "https://api.github.com/users/numerology/repos",
"events_url": "https://api.github.com/users/numerology/events{/privacy}",
"received_events_url": "https://api.github.com/users/numerology/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"/cc @chensun @neuromage ",
"The pkgutil-style namespace packages was introduced to turn the existing package into a namespace package, so that [kfp-pipeline-spec](https://github.com/kubeflow/pipelines/tree/master/api/v2alpha1/python/kfp/pipeline_spec) can share the same namespace `kfp` while it's published as a separate package.\r\n\r\nThis has been working for non-bazel use case, so not sure if this might be a bazel issue. \r\n\r\nLong term, we do plan to merge `kfp-pipeline-spec` back into `kfp`, by that time, we can revert the pkgutil-style namespace package. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@chensun is `kfp-pipeline-spec` merged into `kfp` in the recent version?",
"@chensun We're running into this issue trying to compile our KFP pipeline with Bazel. Have there been any updates or is there a workaround we might try?",
"This error comes from 2 things:\r\n- rules_python adds an [`__init__.py`](https://github.com/bazelbuild/rules_python/blob/888fa20176cdcaebb33f968dc7a8112fb678731d/python/pip_install/extract_wheels/lib/namespace_pkgs.py#L73-L85) for each module\r\n- `kfp` module depends on [kfp-pipeline-spec](https://pypi.org/project/kfp-pipeline-spec/#files), which has a folder `kfp`\r\n\r\nRules python will make that `kfp` folder from `kfp-pipeline-spec` a module and, after that, python will try to load 'TYPE_CHECK' from there and everything fails.\r\n\r\nAs a workaround you need to delete the generated `__init__.py` file from bazel path.\r\n\r\nSomething like this before loading the kfp module:\r\n\r\n```py\r\nimport sys\r\n\r\nfor module_path in sys.path:\r\n if module_path.endswith(\"nameused_kfp_pipeline_spec\"):\r\n os.remove(f\"{module_path}/kfp/__init__.py\")\r\n```\r\n\r\nreplace the path with whatever you have named your pip_parse/pip_install call in WORKSPACE.\r\n\r\nOr open an issue with rules_python for a proper fix",
"Given that the original goal for adopting namespace package style is to ensure the protobuf generated python modules can be accessed under the same kfp namespace, would it be possible to bundle them at release time? ",
"I also ran into the same issue. It shows up as `kfp.__version__` does not exist, as `kfp` points to the one under `kfp_pipeline_spec`.\r\n\r\nIt seems incorrect to me that `kfp` and `kfp_pipeline_spec` have different `__init__.py`:\r\n```\r\n$ cat pip_parsed_deps_kfp/site-packages/kfp/__init__.py \r\n# Copyright 2018-2022 The Kubeflow Authors\r\n#\r\n...\r\n# `kfp` is a namespace package.\r\n# https://packaging.python.org/guides/packaging-namespace-packages/#pkgutil-style-namespace-packages\r\n__path__ = __import__('pkgutil').extend_path(__path__, __name__)\r\n\r\n__version__ = '2.0.1'\r\n\r\nTYPE_CHECK = True\r\n\r\nfrom kfp import dsl\r\nfrom kfp.client import Client\r\n\r\n$ cat pip_parsed_deps_kfp_pipeline_spec/site-packages/kfp/__init__.py \r\n# __path__ manipulation added by bazelbuild/rules_python to support namespace pkgs.\r\n__path__ = __import__('pkgutil').extend_path(__path__, __name__)\r\n```\r\nAccording to[ python's documentation](https://packaging.python.org/en/latest/guides/packaging-namespace-packages/#pkgutil-style-namespace-packages),\r\n\r\n> Every distribution that uses the namespace package must include an identical __init__.py. If any distribution does not, it will cause the namespace logic to fail and the other sub-packages will not be importable. Any additional code in __init__.py will be inaccessible.\r\n\r\n\r\nA fix would be to remove the extra imports etc from `kfp`'s `__init__.py`, or add exactly the same ones to `kfp_pipeline_spec`'s. It would be hard to have bazel add the same `__init__.py` between the two packages.\r\n\r\n@cristifalcas Does your work-around work? It seems to me removing `kfp_pipeline_spec`'s `__init__.py` will prevent it from being recognized as a module and imported?\r\n\r\nMy work-around is to not use bazel, as `pip install` seems to know that `kfp` and `kfp_pipeline_spec` are the same package, put them under the same directory and does not have this issue.",
"@cnsgsz I don't believe we use this hack right now, but it was working correctly, so I still expect it to work?",
"> A fix would be to remove the extra imports etc from `kfp`'s `__init__.py`, or add exactly the same ones to `kfp_pipeline_spec`'s. It would be hard to have bazel add the same `__init__.py` between the two packages.\r\n\r\n@cnsgsz Removing the extra imports from `kfp`'s `__init__.py` is a breaking change affects all users, while those extra imports cannot be added to `kfp_pipeline_spec`'s, as they are not available to `kfp_pipeline_spec`, which is an independent sub-package.\r\n\r\nThe difference in `__init__.py` files is proven to be working just fine for the users, `bazel` is the only scenario it might not be compatible, though not verified. \r\n\r\nAt this moment, the KFP team has no plan to support `bazel` as as a building tool. In fact, `bazel` was deprecated from our repo from a long time ago (https://github.com/kubeflow/pipelines/issues/3250).\r\n\r\n@cristifalcas in the [above](https://github.com/kubeflow/pipelines/issues/5087#issuecomment-1096879641) mentioned a workaround and pointed out the underlying issue is likely [rules_python](https://github.com/bazelbuild/rules_python).",
"@cristifalcas : do you know what changed? why don't you need the hack anymore?\r\n\r\n@chensun:\r\nI understand the difficulty in making it backward compatible etc, though I'm still not convinced that the underlying issue is rules_python. Even without rules_python, namespace packages need to include identical init files, per the python doc I quoted above?\r\n\r\nWill it be possible to merge pipeline spec into kfp as suggested [earlier in the thread](https://github.com/kubeflow/pipelines/issues/5087#issuecomment-1040721497)?\r\n\r\n(I also wonder how pip install is able to put the two packages into 1 directory, thus avoiding the issue.)\r\n\r\nI use bazel to work with python, c++ etc, just as in tensorflow etc. I understand that google internally uses something similar to bazel. What build tool would you recommend in open source if not bazel? What does kfp use without bazel?",
"@cnsgsz , sorry, I was trying to say that the I expect the hack to still work, but we do not use it to be able to confirm this.\r\n\r\nWe did not move that project to bazel",
"> Even without rules_python, namespace packages need to include identical init files, per the python doc I quoted above?\r\n\r\nWe were aware of the deviation from what's suggested in the python doc when we made the namespace package change. My memory of why we possibly had to have this difference is hazy. But based on our research at that time and the experiment in the field, the difference doesn't seems to cause any issue until later this Bazel case which hasn't been proved to the result of the init.py difference yet.\r\nThat being said, @connor-mccarthy on our team is currently actively investigating this topic for a different scenario.\r\n\r\n> Will it be possible to merge pipeline spec into kfp as suggested https://github.com/kubeflow/pipelines/issues/5087#issuecomment-1040721497?\r\n\r\nLikely not an option. Pipeline spec was part of KFP package, and a copy was part of TFX package, the result was users cannot install KFP and TFX in the same environment otherwise the proto-generated Python code conflicts with each other. Due to that issue, we made pipeline spec a standalone package as a dependency for both KFP and TFX--TFX doesn't need to take the entire KFP package as a dependency.\r\n\r\n> I use bazel to work with python, c++ etc, just as in tensorflow etc. I understand that google internally uses something similar to bazel. What build tool would you recommend in open source if not bazel? What does kfp use without bazel?\r\n\r\nFor KFP, the SDK part is implicitly \"built\" through test and packaged using setuptools. The rests (backend and frontend) are built into containers using Dockerfile. I don't have any recommendation for build tools in open source. Maybe you can try this question in [the KFP Slack channel](https://kubeflow.slack.com/)?",
"I confirm that bazel added `pip_parsed_deps_kfp_pipeline_spec/site-packages/kfp/__init__.py` and removing it as @cristifalcas suggested works as a work-around.\r\n\r\nNonetheless, this work-around does not seem to conform to [namespace packages' documentation](https://packaging.python.org/en/latest/guides/packaging-namespace-packages/), as `kfp_pipeline_spec` does not have `__init__.py` while `kfp` has `__init__.py`?"
] | 2021-02-03T16:56:22 | 2023-08-11T19:50:58 | null | NONE | null | ### What steps did you take:
I use Bazel to compile the pipeline. It worked with `kfp==1.0.0` but fails with `kfp>=1.1.2`.
### Environment:
python==3.8.5
bazel==3.2.0
KFP SDK version: `kfp==1.1.2` as well as `kfp==1.3.0`
### Anything else you would like to add:
Traceback from `kfp==1.1.2`:
```python
.../pypi_kfp/kfp/compiler/compiler.py", line 917, in compile
type_check_old_value = kfp.TYPE_CHECK
AttributeError: module 'kfp' has no attribute 'TYPE_CHECK'
```
I suspect that the pkgutil-style namespace packaging breaks the import (at least under Bazel):
https://github.com/kubeflow/pipelines/blob/1.1.2/sdk/python/kfp/__init__.py#L17
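For context, a minimal sketch (assuming `kfp` is importable in the Bazel-built environment) of how the shadowing shows up; nothing here is KFP-specific beyond the attribute name:

```python
import importlib

kfp = importlib.import_module("kfp")

# With a working pkgutil-style namespace package, __path__ lists the kfp/
# directories contributed by every distribution on sys.path.
print(kfp.__path__)

# If a bare, generated kfp/__init__.py was imported instead of kfp's own,
# attributes defined in the real __init__.py (like TYPE_CHECK) are gone.
print(getattr(kfp, "TYPE_CHECK", "missing"))
```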
/kind bug
/area sdk
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5087/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5085 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5085/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5085/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5085/events | https://github.com/kubeflow/pipelines/issues/5085 | 800,347,894 | MDU6SXNzdWU4MDAzNDc4OTQ= | 5,085 | BrokenPipe for ml engine train component | {
"login": "dkajtoch",
"id": 32985207,
"node_id": "MDQ6VXNlcjMyOTg1MjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dkajtoch",
"html_url": "https://github.com/dkajtoch",
"followers_url": "https://api.github.com/users/dkajtoch/followers",
"following_url": "https://api.github.com/users/dkajtoch/following{/other_user}",
"gists_url": "https://api.github.com/users/dkajtoch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dkajtoch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkajtoch/subscriptions",
"organizations_url": "https://api.github.com/users/dkajtoch/orgs",
"repos_url": "https://api.github.com/users/dkajtoch/repos",
"events_url": "https://api.github.com/users/dkajtoch/events{/privacy}",
"received_events_url": "https://api.github.com/users/dkajtoch/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"/cc @chensun @neuromage "
] | 2021-02-03T13:51:22 | 2021-04-08T04:10:08 | 2021-04-05T10:00:30 | CONTRIBUTOR | null | ### What steps did you take:
The ML Engine train component throws a BrokenPipeError after around 30 minutes of continuous training. This is not a one-off situation; jobs keep failing this way.
### Environment:
How did you deploy Kubeflow Pipelines (KFP)?
Kubeflow Pipelines was deployed using the GCP "AI Platform Pipelines" tool.
Kubeflow Pipelines 1.0.4
Kubernetes: 1.17.14-gke.1600
```
ML_ENGINE_TRAIN_OP = comp.load_component_from_url(
"https://raw.githubusercontent.com/kubeflow/pipelines/1.3.0/components/gcp/ml_engine/train/component.yaml"
)
```
KFP version: 1.0.4
KFP SDK version: 1.3.0
### Anything else you would like to add:
/kind bug
![image](https://user-images.githubusercontent.com/32985207/106754349-29749600-662d-11eb-94db-c9e30ef639fe.png)
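Not a fix, but as a mitigation while this is being debugged, the step can be told to retry on transient failures via the v1 SDK's `set_retry`. A minimal sketch; the pipeline name and argument values below are hypothetical placeholders, and the component URL is the one from the snippet above:

```python
import kfp.components as comp
import kfp.dsl as dsl

ML_ENGINE_TRAIN_OP = comp.load_component_from_url(
    "https://raw.githubusercontent.com/kubeflow/pipelines/1.3.0/components/gcp/ml_engine/train/component.yaml"
)

@dsl.pipeline(name="ml-engine-train", description="illustrative only")
def train_pipeline():
    train_op = ML_ENGINE_TRAIN_OP(
        project_id="my-project",       # hypothetical placeholder values
        python_module="trainer.task",
        region="us-central1",
    )
    # Retry the step a few times if the launcher crashes, e.g. with the
    # BrokenPipeError above while it polls the training job.
    train_op.set_retry(3)
```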
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5085/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5084 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5084/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5084/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5084/events | https://github.com/kubeflow/pipelines/issues/5084 | 800,098,586 | MDU6SXNzdWU4MDAwOTg1ODY= | 5,084 | [FR] Support namespaced pipelines from the UI | {
"login": "StefanoFioravanzo",
"id": 3354305,
"node_id": "MDQ6VXNlcjMzNTQzMDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3354305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StefanoFioravanzo",
"html_url": "https://github.com/StefanoFioravanzo",
"followers_url": "https://api.github.com/users/StefanoFioravanzo/followers",
"following_url": "https://api.github.com/users/StefanoFioravanzo/following{/other_user}",
"gists_url": "https://api.github.com/users/StefanoFioravanzo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StefanoFioravanzo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StefanoFioravanzo/subscriptions",
"organizations_url": "https://api.github.com/users/StefanoFioravanzo/orgs",
"repos_url": "https://api.github.com/users/StefanoFioravanzo/repos",
"events_url": "https://api.github.com/users/StefanoFioravanzo/events{/privacy}",
"received_events_url": "https://api.github.com/users/StefanoFioravanzo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2152751095,
"node_id": "MDU6TGFiZWwyMTUyNzUxMDk1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen",
"name": "lifecycle/frozen",
"color": "ededed",
"default": false,
"description": null
}
] | closed | false | null | [] | null | [
"Part of https://github.com/kubeflow/pipelines/issues/4197",
"A summary of previous discussion,\r\n\r\nDo we all agree on key criteria for this feature?\r\n* p0 being backward-compatible (e.g. if user stays in non-separation mode for pipelines, they should still be able to do what they want).\r\n* p0 allow users to upload to shared/non-shared\r\n* p0 allow users to list/select from shared/non-shared pipelines using a switch/tabs\r\n* p1 allow a configuration to disable shared pipelines from both API & UI",
"@StefanoFioravanzo How is your situation for this issue ? Do you work on this internally ?",
"Hey @toshi-k we will start working on this soon!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"/lifecycle frozen",
"Latest status for the backend API implementation is summarized in https://github.com/kubeflow/pipelines/issues/4197#issuecomment-900821642",
"Any update on this feature it would be really helpful for our use cases",
"@StefanoFioravanzo @kubeflow/arrikto is this still on your radar?",
"Hey @Bobgy thanks for the ping. We definitely still care about this. We don't have bandwidth right now, but we want to allocate some time during the first months of next year. ",
"Hello, and thanks for implementing this in backend! Completing this feature in frontend would be really useful for us. Along with the potential implementation of [namespaced artifacts](https://github.com/kubeflow/pipelines/issues/4649), it would provide complete isolation of resources. \r\n\r\nSince we work with different users, they would like to have complete control over the resources they submit (pipeline, runs, experiments, artifacts). \r\n\r\nAny chance this feature can be prioritized in the following months?",
"cc @jbottum , who has also raised the same request during Kubeflow Pipelines community meeting this week.",
"> cc @jbottum , who has also raised the same request during Kubeflow Pipelines community meeting this week.\r\n\r\nIs there any progress made regarding the frontend issue?"
] | 2021-02-03T08:30:03 | 2023-02-20T08:43:41 | 2023-02-20T08:43:41 | MEMBER | null | Given the backend support of namespaced pipelines (https://github.com/kubeflow/pipelines/pull/4835), we can use this issue to discuss how to bring this feature to the UI as well. We (Arrikto) will start working on this soon and provide updates here for discussion.
cc @Bobgy @yanniszark @elikatsis @maganaluis | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5084/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5084/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5073 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5073/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5073/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5073/events | https://github.com/kubeflow/pipelines/issues/5073 | 798,932,648 | MDU6SXNzdWU3OTg5MzI2NDg= | 5,073 | Dataflow Launch Python Sample.ipynb is Unreadable | {
"login": "harshit-deepsource",
"id": 75235231,
"node_id": "MDQ6VXNlcjc1MjM1MjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/75235231?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harshit-deepsource",
"html_url": "https://github.com/harshit-deepsource",
"followers_url": "https://api.github.com/users/harshit-deepsource/followers",
"following_url": "https://api.github.com/users/harshit-deepsource/following{/other_user}",
"gists_url": "https://api.github.com/users/harshit-deepsource/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harshit-deepsource/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harshit-deepsource/subscriptions",
"organizations_url": "https://api.github.com/users/harshit-deepsource/orgs",
"repos_url": "https://api.github.com/users/harshit-deepsource/repos",
"events_url": "https://api.github.com/users/harshit-deepsource/events{/privacy}",
"received_events_url": "https://api.github.com/users/harshit-deepsource/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | null | [] | null | [
"/cc @chensun ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-02-02T05:07:04 | 2022-04-28T18:00:18 | 2022-04-28T18:00:18 | NONE | null | Unable to open components/gcp/dataflow/launch_python/sample.ipynb: the file is unreadable because it contains invalid JSON. Please upload a working file. I tried opening it locally as well, but it's broken. Thanks.
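For reference, a quick sketch of how the breakage can be confirmed locally (standard `json` plus Jupyter's `nbformat` reader; the path assumes a checkout of this repo):

```python
import json

import nbformat

path = "components/gcp/dataflow/launch_python/sample.ipynb"

# Plain JSON parsing fails first when the file itself is malformed.
try:
    with open(path) as f:
        json.load(f)
except json.JSONDecodeError as e:
    print(f"invalid JSON: {e}")

# nbformat is the reader Jupyter uses; it also rejects non-JSON input.
try:
    nbformat.read(path, as_version=4)
except nbformat.reader.NotJSONError as e:
    print(f"not a readable notebook: {e}")
```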
https://github.com/kubeflow/pipelines/blob/master/components/gcp/dataflow/launch_python/sample.ipynb | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5073/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5068 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5068/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5068/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5068/events | https://github.com/kubeflow/pipelines/issues/5068 | 798,879,205 | MDU6SXNzdWU3OTg4NzkyMDU= | 5,068 | Dependency Dashboard | {
"login": "forking-renovate[bot]",
"id": 34481203,
"node_id": "MDM6Qm90MzQ0ODEyMDM=",
"avatar_url": "https://avatars.githubusercontent.com/in/7402?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forking-renovate%5Bbot%5D",
"html_url": "https://github.com/apps/forking-renovate",
"followers_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Part of https://github.com/kubeflow/pipelines/issues/4682\r\n\r\nThis is a dashboard provided by renovate-bot, maintainers can check the marks above to let the bot create certain update PRs.",
"@Bobgy Excuse me of mentioning.\r\n\r\n`robfig/cron` package now supports standard `CRON_TZ =` spec and this is important for our usecases.\r\n\r\nhttps://github.com/robfig/cron/tree/v3.0.1#upgrading-to-v3-june-2019\r\n\r\nWe should update the link to https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format on frontend and SDK as it's for v1.2.0, but I think some other breaking changes can be justified with updated link and version update of Kubeflow.\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/8dc170147d0cdf3d03e654563ede3fc65f0f08ae/frontend/src/components/Trigger.tsx#L377\r\n\r\n\r\nKubernetes now uses v3 too.\r\n\r\nhttps://github.com/kubernetes/kubernetes/blob/758ad0790ceae9e7be0554637c0fd623ff02e45f/go.mod#L76\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/139acc88a620ae7a6974472d154e706c2396ced1/sdk/python/kfp/_client.py#L784\r\n\r\nIs it ok to upgrading by PR?"
] | 2021-02-02T02:55:43 | 2022-07-26T10:23:27 | null | NONE | null | This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Pending Approval
These branches will be created by Renovate only once you click their checkbox below.
- [ ] <!-- approve-branch=renovate/gcr.io-inverting-proxy-agent-digest -->chore(deps): update gcr.io/inverting-proxy/agent digest to 762c500
- [ ] <!-- approve-branch=renovate/github.com-kubeflow-pipelines-api-digest -->fix(deps): update github.com/kubeflow/pipelines/api digest to b10440d
- [ ] <!-- approve-branch=renovate/github.com-kubeflow-pipelines-third_party-ml-metadata-digest -->fix(deps): update github.com/kubeflow/pipelines/third_party/ml-metadata digest to b10440d
- [ ] <!-- approve-branch=renovate/golang.org-x-net-digest -->fix(deps): update golang.org/x/net digest to 46097bf
- [ ] <!-- approve-branch=renovate/google.golang.org-genproto-digest -->fix(deps): update google.golang.org/genproto digest to 272f38e
- [ ] <!-- approve-branch=renovate/patch-docker-updates -->chore(deps): update docker patch updates (patch) (`golang`, `node`, `tensorflow/tensorflow`)
- [ ] <!-- approve-branch=renovate/patch-go-mod-updates -->fix(deps): update go.mod dependencies (patch) (`github.com/aws/aws-sdk-go`, `github.com/fsnotify/fsnotify`, `github.com/go-openapi/strfmt`, `github.com/google/go-cmp`, `github.com/jinzhu/gorm`, `github.com/lestrrat-go/strftime`, `github.com/prometheus/client_golang`, `github.com/stretchr/testify`, `k8s.io/api`, `k8s.io/apimachinery`, `k8s.io/client-go`, `k8s.io/code-generator`, `sigs.k8s.io/controller-runtime`)
- [ ] <!-- approve-branch=renovate/alpine-3.x -->chore(deps): update dependency alpine to v3.16
- [ ] <!-- approve-branch=renovate/golang-1.x -->chore(deps): update dependency golang
- [ ] <!-- approve-branch=renovate/python-3.x -->chore(deps): update dependency python to v3.10
- [ ] <!-- approve-branch=renovate/tensorflow-tensorflow-1.x -->chore(deps): update dependency tensorflow/tensorflow to v1.15.5
- [ ] <!-- approve-branch=renovate/go-1.x -->chore(deps): update module go to 1.18
- [ ] <!-- approve-branch=renovate/node-12.x -->chore(deps): update node.js to v12.22.12
- [ ] <!-- approve-branch=renovate/node-14.x -->chore(deps): update node.js to v14.20.0
- [ ] <!-- approve-branch=renovate/go-mod-updates -->fix(deps): update go.mod dependencies (minor) (`github.com/aws/aws-sdk-go`, `github.com/eapache/go-resiliency`, `github.com/go-openapi/runtime`, `github.com/go-openapi/swag`, `github.com/go-openapi/validate`, `github.com/google/cel-go`, `github.com/mattn/go-sqlite3`, `github.com/sirupsen/logrus`, `github.com/spf13/viper`, `github.com/stretchr/testify`, `gocloud.dev`, `google.golang.org/grpc`, `google.golang.org/grpc/cmd/protoc-gen-go-grpc`, `google.golang.org/protobuf`, `k8s.io/api`, `k8s.io/apimachinery`, `k8s.io/client-go`, `k8s.io/code-generator`, `k8s.io/kubernetes`, `sigs.k8s.io/controller-runtime`)
- [ ] <!-- approve-branch=renovate/npm-updates -->fix(deps): update npm dependencies (minor) (`@craco/craco`, `@google-cloud/storage`, `@kubernetes/client-node`, `@storybook/addon-actions`, `@storybook/addon-essentials`, `@storybook/addon-links`, `@storybook/node-logger`, `@storybook/react`, `@testing-library/dom`, `@testing-library/user-event`, `@types/d3`, `@types/d3-dsv`, `@types/express`, `@types/google-protobuf`, `@types/http-proxy-middleware`, `@types/markdown-to-jsx`, `@types/node-fetch`, `@types/react`, `@types/react-test-renderer`, `@types/react-virtualized`, `axios`, `browserslist`, `coveralls`, `d3`, `d3-dsv`, `enzyme`, `enzyme-to-json`, `express`, `google-protobuf`, `grpc-web`, `http-proxy-middleware`, `jest`, `markdown-to-jsx`, `react`, `react-dom`, `react-flow-renderer`, `react-query`, `react-virtualized`, `runtypes`, `snapshot-diff`, `tailwindcss`, `tar-stream`, `ts-jest`, `ts-proto`, `tsconfig-paths`, `typescript`, `typestyle`, `wait-port`, `webpack-bundle-analyzer`, `yaml`)
- [ ] <!-- approve-branch=renovate/storybook-preset-create-react-app-4.x -->chore(deps): update dependency @storybook/preset-create-react-app to v4
- [ ] <!-- approve-branch=renovate/testing-library-react-13.x -->chore(deps): update dependency @testing-library/react to v13
- [ ] <!-- approve-branch=renovate/testing-library-user-event-14.x -->chore(deps): update dependency @testing-library/user-event to v14
- [ ] <!-- approve-branch=renovate/react-18.x -->chore(deps): update dependency @types/react to v18
- [ ] <!-- approve-branch=renovate/react-dom-18.x -->chore(deps): update dependency @types/react-dom to v18
- [ ] <!-- approve-branch=renovate/react-router-dom-5.x -->chore(deps): update dependency @types/react-router-dom to v5
- [ ] <!-- approve-branch=renovate/react-test-renderer-18.x -->chore(deps): update dependency @types/react-test-renderer to v18
- [ ] <!-- approve-branch=renovate/tar-6.x -->chore(deps): update dependency @types/tar to v6
- [ ] <!-- approve-branch=renovate/tar-stream-2.x -->chore(deps): update dependency @types/tar-stream to v2
- [ ] <!-- approve-branch=renovate/debian-11.x -->chore(deps): update dependency debian to v11
- [ ] <!-- approve-branch=renovate/google-cloud-sdk-394.x -->chore(deps): update dependency google/cloud-sdk to v394
- [ ] <!-- approve-branch=renovate/prettier-2.x -->chore(deps): update dependency prettier to v2 (`prettier`, `@types/prettier`)
- [ ] <!-- approve-branch=renovate/standard-version-9.x -->chore(deps): update dependency standard-version to v9
- [ ] <!-- approve-branch=renovate/supertest-6.x -->chore(deps): update dependency supertest to v6
- [ ] <!-- approve-branch=renovate/tensorflow-tensorflow-2.x -->chore(deps): update dependency tensorflow/tensorflow
- [ ] <!-- approve-branch=renovate/ts-node-10.x -->chore(deps): update dependency ts-node to v10
- [ ] <!-- approve-branch=renovate/ts-node-dev-2.x -->chore(deps): update dependency ts-node-dev to v2
- [ ] <!-- approve-branch=renovate/tsconfig-paths-4.x -->chore(deps): update dependency tsconfig-paths to v4
- [ ] <!-- approve-branch=renovate/typescript-4.x -->chore(deps): update dependency typescript to v4
- [ ] <!-- approve-branch=renovate/ubuntu-22.x -->chore(deps): update dependency ubuntu to v22
- [ ] <!-- approve-branch=renovate/webpack-bundle-analyzer-4.x -->chore(deps): update dependency webpack-bundle-analyzer to v4
- [ ] <!-- approve-branch=renovate/major-jest-monorepo -->chore(deps): update jest monorepo to v28 (major) (`@types/jest`, `jest`, `ts-jest`)
- [ ] <!-- approve-branch=renovate/node-16.x -->chore(deps): update node.js to v16 (`node`, `@types/node`)
- [ ] <!-- approve-branch=renovate/node-18.x -->chore(deps): update node.js to v18
- [ ] <!-- approve-branch=renovate/major-googleapis-packages -->fix(deps): update dependency @google-cloud/storage to v6
- [ ] <!-- approve-branch=renovate/pako-2.x -->fix(deps): update dependency @types/pako to v2
- [ ] <!-- approve-branch=renovate/crypto-js-4.x -->fix(deps): update dependency crypto-js to v4 (`crypto-js`, `@types/crypto-js`)
- [ ] <!-- approve-branch=renovate/d3-7.x -->fix(deps): update dependency d3 to v7 (`d3`, `@types/d3`)
- [ ] <!-- approve-branch=renovate/d3-dsv-3.x -->fix(deps): update dependency d3-dsv to v3 (`d3-dsv`, `@types/d3-dsv`)
- [ ] <!-- approve-branch=renovate/http-proxy-middleware-2.x -->fix(deps): update dependency http-proxy-middleware to v2
- [ ] <!-- approve-branch=renovate/js-yaml-4.x -->fix(deps): update dependency js-yaml to v4 (`js-yaml`, `@types/js-yaml`)
- [ ] <!-- approve-branch=renovate/markdown-to-jsx-7.x -->fix(deps): update dependency markdown-to-jsx to v7 (`markdown-to-jsx`, `@types/markdown-to-jsx`)
- [ ] <!-- approve-branch=renovate/mocha-10.x -->fix(deps): update dependency mocha to v10
- [ ] <!-- approve-branch=renovate/node-fetch-3.x -->fix(deps): update dependency node-fetch to v3
- [ ] <!-- approve-branch=renovate/proto3-json-serializer-1.x -->fix(deps): update dependency proto3-json-serializer to v1
- [ ] <!-- approve-branch=renovate/re-resizable-6.x -->fix(deps): update dependency re-resizable to v6
- [ ] <!-- approve-branch=renovate/react-ace-10.x -->fix(deps): update dependency react-ace to v10
- [ ] <!-- approve-branch=renovate/react-dropzone-14.x -->fix(deps): update dependency react-dropzone to v14
- [ ] <!-- approve-branch=renovate/react-flow-renderer-10.x -->fix(deps): update dependency react-flow-renderer to v10
- [ ] <!-- approve-branch=renovate/major-react-router-monorepo -->fix(deps): update dependency react-router-dom to v6
- [ ] <!-- approve-branch=renovate/major-webdriverio-monorepo -->fix(deps): update dependency webdriverio to v7
- [ ] <!-- approve-branch=renovate/major-material-ui-monorepo -->fix(deps): update material-ui monorepo to v4 (major) (`@material-ui/core`, `@material-ui/icons`)
- [ ] <!-- approve-branch=renovate/github.com-google-addlicense-1.x -->fix(deps): update module github.com/google/addlicense to v1
- [ ] <!-- approve-branch=renovate/github.com-masterminds-squirrel-1.x -->fix(deps): update module github.com/masterminds/squirrel to v1
- [ ] <!-- approve-branch=renovate/github.com-vividcortex-mysqlerr-1.x -->fix(deps): update module github.com/vividcortex/mysqlerr to v1
- [ ] <!-- approve-branch=renovate/major-kubernetes-go -->fix(deps): update module k8s.io/client-go to v1
- [ ] <!-- approve-branch=renovate/k8s.io-kubernetes-1.x -->fix(deps): update module k8s.io/kubernetes to v1
- [ ] <!-- approve-branch=renovate/major-react-monorepo -->fix(deps): update react monorepo to v18 (major) (`react`, `react-dom`, `react-test-renderer`)
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/npm-protobufjs-vulnerability -->[fix(deps): update dependency protobufjs to v6.11.3 [security]](../pull/7957)
- [ ] <!-- rebase-branch=renovate/patch-npm-updates -->[fix(deps): update npm dependencies (patch)](../pull/8000) (`@material-ui/icons`, `@storybook/addon-actions`, `@storybook/addon-essentials`, `@storybook/addon-links`, `@storybook/node-logger`, `@storybook/react`, `@testing-library/dom`, `@testing-library/react`, `@types/crypto-js`, `@types/d3-dsv`, `@types/dagre`, `@types/enzyme`, `@types/enzyme-adapter-react-16`, `@types/express`, `@types/google-protobuf`, `@types/jest`, `@types/js-yaml`, `@types/lodash`, `@types/lodash.groupby`, `@types/markdown-to-jsx`, `@types/minio`, `@types/node`, `@types/node-fetch`, `@types/pako`, `@types/prettier`, `@types/react`, `@types/react-dom`, `@types/react-router-dom`, `@types/react-virtualized`, `@types/supertest`, `@types/tar`, `@types/tar-stream`, `autoprefixer`, `axios`, `browserslist`, `coveralls`, `dagre`, `enzyme-adapter-react-16`, `express`, `gunzip-maybe`, `http-proxy-middleware`, `immer`, `minio`, `node-fetch`, `postcss`, `proto3-json-serializer`, `react-ace`, `react-flow-renderer`, `react-query`, `react-scripts`, `react-textarea-autosize`, `react-virtualized`, `react-vis`, `runtypes`, `snapshot-diff`, `standard-version`, `tailwindcss`, `tar-stream`, `ts-proto`, `typescript`, `wait-port`, `wdio-selenium-standalone-service`, `yaml`)
## Detected dependencies
<details><summary>cloudbuild</summary>
<blockquote>
<details><summary>components/google-cloud/google_cloud_pipeline_components/container/cloudbuild.yaml</summary>
- `gcr.io/kaniko-project/executor latest`
</details>
<details><summary>samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/cloudbuild.yaml</summary>
- `gcr.io/cloud-builders/docker no version found`
- `gcr.io/cloud-builders/docker no version found`
- `gcr.io/cloud-builders/docker no version found`
- `gcr.io/cloud-builders/docker no version found`
- `gcr.io/cloud-builders/docker no version found`
- `gcr.io/cloud-builders/docker no version found`
- `gcr.io/cloud-builders/docker no version found`
- `gcr.io/cloud-builders/docker no version found`
- `gcr.io/cloud-builders/docker no version found`
- `gcr.io/cloud-builders/docker no version found`
- `python 3.7-slim`
- `gcr.io/cloud-builders/gsutil no version found`
- `gcr.io/cloud-builders/kubectl no version found`
</details>
<details><summary>samples/contrib/versioned-pipeline-ci-samples/mnist-ci-sample/cloudbuild.yaml</summary>
- `gcr.io/cloud-builders/docker no version found`
- `gcr.io/cloud-builders/docker no version found`
- `gcr.io/cloud-builders/docker no version found`
- `gcr.io/cloud-builders/docker no version found`
- `python 3.7-slim`
- `gcr.io/cloud-builders/gsutil no version found`
- `gcr.io/cloud-builders/kubectl no version found`
</details>
</blockquote>
</details>
<details><summary>docker-compose</summary>
<blockquote>
<details><summary>sdk/python/tests/compiler/testdata/compose.yaml</summary>
</details>
</blockquote>
</details>
<details><summary>dockerfile</summary>
<blockquote>
<details><summary>backend/Dockerfile</summary>
- `golang 1.17.6-stretch`
- `python 3.7`
- `debian stretch`
</details>
<details><summary>backend/Dockerfile.cacheserver</summary>
- `golang 1.17.6-alpine3.15`
- `alpine 3.8`
</details>
<details><summary>backend/Dockerfile.conformance</summary>
- `golang 1.17.6-alpine3.15`
- `alpine 3.8`
</details>
<details><summary>backend/Dockerfile.persistenceagent</summary>
- `golang 1.17.6-alpine3.15`
- `alpine 3.8`
</details>
<details><summary>backend/Dockerfile.scheduledworkflow</summary>
- `golang 1.17.6-alpine3.15`
- `alpine 3.8`
</details>
<details><summary>backend/Dockerfile.viewercontroller</summary>
- `golang 1.17.6-alpine3.15`
- `alpine no version found`
</details>
<details><summary>backend/Dockerfile.visualization</summary>
- `tensorflow/tensorflow 2.5.1`
</details>
<details><summary>backend/api/Dockerfile</summary>
- `golang 1.15.10`
</details>
<details><summary>backend/metadata_writer/Dockerfile</summary>
- `python 3.7`
</details>
<details><summary>backend/src/cache/deployer/Dockerfile</summary>
- `gcr.io/google.com/cloudsdktool/google-cloud-cli alpine`
</details>
<details><summary>backend/src/v2/test/Dockerfile</summary>
- `python 3.7-slim`
</details>
<details><summary>components/contrib/arena/docker/Dockerfile</summary>
- `golang 1.10-stretch`
- `python 3.7-alpine3.9`
</details>
<details><summary>components/contrib/presto/query/Dockerfile</summary>
- `python 3.7`
</details>
<details><summary>components/contrib/sample/keras/train_classifier/Dockerfile</summary>
- `tensorflow/tensorflow 1.12.0-py3`
</details>
<details><summary>components/gcp/container/Dockerfile</summary>
- `python 3.7`
</details>
<details><summary>components/google-cloud/google_cloud_pipeline_components/container/Dockerfile</summary>
- `gcr.io/google-appengine/python latest`
</details>
<details><summary>components/kserve/Dockerfile</summary>
- `python 3.6-slim`
</details>
<details><summary>components/kubeflow/deployer/Dockerfile</summary>
- `debian no version found`
</details>
<details><summary>components/kubeflow/dnntrainer/Dockerfile</summary>
- `undefined no version found`
</details>
<details><summary>components/kubeflow/katib-launcher/Dockerfile</summary>
- `python 3.6`
</details>
<details><summary>components/kubeflow/kfserving/Dockerfile</summary>
- `python 3.6-slim`
</details>
<details><summary>components/kubeflow/launcher/Dockerfile</summary>
- `python 3.6`
</details>
<details><summary>components/kubeflow/pytorch-launcher/Dockerfile</summary>
- `python 3.6`
</details>
<details><summary>components/local/base/Dockerfile</summary>
- `python 3.7`
</details>
<details><summary>components/local/confusion_matrix/Dockerfile</summary>
- `ml-pipeline-local-base no version found`
</details>
<details><summary>components/local/roc/Dockerfile</summary>
- `ml-pipeline-local-base no version found`
</details>
<details><summary>contrib/components/openvino/model_convert/containers/Dockerfile</summary>
- `ubuntu 16.04`
</details>
<details><summary>contrib/components/openvino/ovms-deployer/containers/Dockerfile</summary>
- `intelpython/intelpython3_core no version found`
</details>
<details><summary>contrib/components/openvino/predict/containers/Dockerfile</summary>
- `ubuntu 16.04`
- `ubuntu 16.04`
</details>
<details><summary>contrib/components/openvino/tf-slim/containers/Dockerfile</summary>
- `intelpython/intelpython3_core no version found`
- `intelpython/intelpython3_core no version found`
</details>
<details><summary>frontend/Dockerfile</summary>
- `node 14.18.2`
- `node 14.18.2-alpine`
</details>
<details><summary>manifests/gcp_marketplace/deployer/Dockerfile</summary>
- `gcr.io/cloud-marketplace-tools/k8s/deployer_helm/onbuild 0.11.3`
</details>
<details><summary>proxy/Dockerfile</summary>
- `gcr.io/inverting-proxy/agent sha256:c875588c1be53b1bc0c7183653347acf7887ab32a299a2e9b292bd6188a4e26b`
</details>
<details><summary>samples/contrib/image-captioning-gcp/src/Dockerfile</summary>
- `tensorflow/tensorflow 2.0.0b0-py3`
</details>
<details><summary>samples/contrib/nvidia-resnet/components/inference_server_launcher/Dockerfile</summary>
- `ubuntu 16.04`
</details>
<details><summary>samples/contrib/nvidia-resnet/components/preprocess/Dockerfile</summary>
- `nvcr.io/nvidia/tensorflow 19.03-py3`
</details>
<details><summary>samples/contrib/nvidia-resnet/components/train/Dockerfile</summary>
- `nvcr.io/nvidia/tensorflow 19.03-py3`
</details>
<details><summary>samples/contrib/nvidia-resnet/components/webapp/Dockerfile</summary>
- `base-trtis-client no version found`
</details>
<details><summary>samples/contrib/nvidia-resnet/components/webapp_launcher/Dockerfile</summary>
- `ubuntu 16.04`
</details>
<details><summary>samples/contrib/nvidia-resnet/pipeline/Dockerfile</summary>
- `python 3.6`
</details>
<details><summary>samples/contrib/pytorch-samples/Dockerfile</summary>
- `pytorch/pytorch latest`
</details>
<details><summary>samples/contrib/pytorch-samples/common/tensorboard/Dockerfile</summary>
- `gcr.io/deeplearning-platform-release/tf2-cpu.2-2 latest`
</details>
<details><summary>samples/contrib/versioned-pipeline-ci-samples/helloworld-ci-sample/helloworld/Dockerfile</summary>
- `python 3`
</details>
<details><summary>samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/download_dataset/Dockerfile</summary>
- `python 3.7`
</details>
<details><summary>samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/submit_result/Dockerfile</summary>
- `python 3.7`
</details>
<details><summary>samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/train_model/Dockerfile</summary>
- `python 3.7`
</details>
<details><summary>samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/visualize_html/Dockerfile</summary>
- `tensorflow/tensorflow 2.0.0-py3`
</details>
<details><summary>samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/visualize_table/Dockerfile</summary>
- `python 3.7`
</details>
<details><summary>samples/contrib/versioned-pipeline-ci-samples/mnist-ci-sample/tensorboard/Dockerfile</summary>
- `python 3.7-slim`
</details>
<details><summary>samples/contrib/versioned-pipeline-ci-samples/mnist-ci-sample/train/Dockerfile</summary>
- `tensorflow/tensorflow 2.0.0-py3`
</details>
<details><summary>test/api-integration-test/Dockerfile</summary>
- `golang 1.17`
</details>
<details><summary>test/frontend-integration-test/Dockerfile</summary>
- `gcr.io/ml-pipeline-test/selenium-standalone-chrome-gcloud-nodejs v20200210-0.2.2-30-g05865480-e3b0c4`
</details>
<details><summary>test/frontend-integration-test/selenium-standalone-chrome-gcloud-nodejs.Docker/Dockerfile</summary>
- `selenium/standalone-chrome 3.141.59-oxygen`
</details>
<details><summary>test/imagebuilder/Dockerfile</summary>
- `google/cloud-sdk 279.0.0`
</details>
<details><summary>test/images/Dockerfile</summary>
- `gcr.io/k8s-testimages/kubekins-e2e v20200204-8eefa86-master`
</details>
<details><summary>test/initialization-test/Dockerfile</summary>
- `golang 1.17`
</details>
<details><summary>test/release/Dockerfile.release</summary>
- `gcr.io/ml-pipeline-test/api-generator latest`
</details>
<details><summary>test/sample-test/Dockerfile</summary>
- `google/cloud-sdk 352.0.0`
</details>
<details><summary>tools/bazel_builder/Dockerfile</summary>
- `gcr.io/cloud-marketplace/google/rbe-ubuntu16-04 sha256:69c9f1652941d64a46f6f7358a44c1718f25caa5cb1ced4a58ccc5281cd183b5`
</details>
</blockquote>
</details>
<details><summary>gomod</summary>
<blockquote>
<details><summary>api/go.mod</summary>
- `go 1.16`
- `google.golang.org/genproto v0.0.0-20211026145609-4688e4c4e024@4688e4c4e024`
- `google.golang.org/protobuf v1.27.1`
</details>
<details><summary>go.mod</summary>
- `github.com/Masterminds/squirrel v0.0.0-20190107164353-fa735ea14f09@fa735ea14f09`
- `github.com/VividCortex/mysqlerr v0.0.0-20170204212430-6c6b55f8796f@6c6b55f8796f`
- `github.com/argoproj/argo-workflows/v3 v3.3.8`
- `github.com/aws/aws-sdk-go v1.42.50`
- `github.com/cenkalti/backoff v2.2.1+incompatible`
- `github.com/eapache/go-resiliency v1.2.0`
- `github.com/fsnotify/fsnotify v1.5.1`
- `github.com/ghodss/yaml v1.0.1-0.20190212211648-25d852aebe32`
- `github.com/go-openapi/errors v0.20.2`
- `github.com/go-openapi/runtime v0.21.1`
- `github.com/go-openapi/strfmt v0.21.1`
- `github.com/go-openapi/swag v0.19.15`
- `github.com/go-openapi/validate v0.20.3`
- `github.com/go-sql-driver/mysql v1.6.0`
- `github.com/golang/glog v1.0.0`
- `github.com/golang/protobuf v1.5.2`
- `github.com/google/addlicense v0.0.0-20200906110928-a0294312aa76@a0294312aa76`
- `github.com/google/cel-go v0.9.0`
- `github.com/google/go-cmp v0.5.7`
- `github.com/google/uuid v1.3.0`
- `github.com/gorilla/mux v1.8.0`
- `github.com/grpc-ecosystem/go-grpc-middleware v1.3.0`
- `github.com/grpc-ecosystem/grpc-gateway v1.16.0`
- `github.com/jinzhu/gorm v1.9.1`
- `github.com/kubeflow/pipelines/api v0.0.0-20220311022801-11635101d944@11635101d944`
- `github.com/kubeflow/pipelines/third_party/ml-metadata v0.0.0-20220118175555-e78ed557ddcb@e78ed557ddcb`
- `github.com/lestrrat-go/strftime v1.0.4`
- `github.com/mattn/go-sqlite3 v1.9.0`
- `github.com/minio/minio-go/v6 v6.0.57`
- `github.com/peterhellberg/duration v0.0.0-20191119133758-ec6baeebcd10@ec6baeebcd10`
- `github.com/pkg/errors v0.9.1`
- `github.com/prometheus/client_golang v1.12.1`
- `github.com/robfig/cron v1.2.0`
- `github.com/sirupsen/logrus v1.8.1`
- `github.com/spf13/viper v1.10.1`
- `github.com/stretchr/testify v1.7.0`
- `gocloud.dev v0.22.0`
- `golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd@cd36cc0744dd`
- `google.golang.org/genproto v0.0.0-20220310185008-1973136f34c6@1973136f34c6`
- `google.golang.org/grpc v1.44.0`
- `google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0`
- `google.golang.org/protobuf v1.27.1`
- `gopkg.in/yaml.v2 v2.4.0`
- `k8s.io/api v0.23.3`
- `k8s.io/apimachinery v0.23.3`
- `k8s.io/client-go v0.23.3`
- `k8s.io/code-generator v0.23.3`
- `k8s.io/kubernetes v0.17.9`
- `sigs.k8s.io/controller-runtime v0.11.1`
- `go 1.13`
</details>
<details><summary>test/tools/project-cleaner/go.mod</summary>
- `go 1.16`
</details>
</blockquote>
</details>
<details><summary>npm</summary>
<blockquote>
<details><summary>frontend/mock-backend/package.json</summary>
- `@types/express ^4.16.0`
- `express ^4.16.3`
</details>
<details><summary>frontend/package.json</summary>
- `@craco/craco ^6.2.0`
- `@material-ui/core ^3.9.1`
- `@material-ui/icons ^3.0.1`
- `@types/lodash.groupby ^4.6.6`
- `@types/pako ^1.0.3`
- `brace ^0.11.1`
- `d3 ^5.7.0`
- `d3-dsv ^1.0.10`
- `dagre ^0.8.2`
- `google-protobuf ^3.11.2`
- `grpc-web ^1.2.1`
- `http-proxy-middleware ^0.19.0`
- `immer ^9.0.6`
- `js-yaml ^3.14.1`
- `lodash ^4.17.21`
- `lodash.debounce ^4.0.8`
- `lodash.flatten ^4.4.0`
- `lodash.groupby ^4.6.0`
- `lodash.isfunction ^3.0.9`
- `markdown-to-jsx ^6.10.3`
- `pako ^2.0.4`
- `portable-fetch ^3.0.0`
- `proto3-json-serializer ^0.1.6`
- `protobufjs ~6.11.2`
- `re-resizable ^4.9.0`
- `react ^16.12.0`
- `react-ace ^7.0.2`
- `react-dom ^16.12.0`
- `react-dropzone ^5.1.0`
- `react-flow-renderer ^9.6.3`
- `react-query ^3.16.0`
- `react-router-dom ^4.3.1`
- `react-svg-line-chart ^2.0.2`
- `react-textarea-autosize ^8.3.3`
- `react-virtualized ^9.20.1`
- `react-vis ^1.11.2`
- `request ^2.88.2`
- `runtypes ^6.3.0`
- `ts-proto ^1.95.0`
- `typestyle ^2.0.4`
- `@google-cloud/storage ^4.1.3`
- `@storybook/addon-actions ^6.3.6`
- `@storybook/addon-essentials ^6.3.6`
- `@storybook/addon-links ^6.3.6`
- `@storybook/node-logger ^6.3.6`
- `@storybook/preset-create-react-app ^3.2.0`
- `@storybook/react ^6.3.6`
- `@testing-library/dom ^8.6.0`
- `@testing-library/react ^11.2.6`
- `@testing-library/user-event ^13.2.1`
- `@types/d3 ^5.0.0`
- `@types/d3-dsv ^1.0.33`
- `@types/dagre ^0.7.40`
- `@types/enzyme ^3.10.3`
- `@types/enzyme-adapter-react-16 ^1.0.5`
- `@types/express ^4.16.0`
- `@types/google-protobuf ^3.7.2`
- `@types/http-proxy-middleware ^0.17.5`
- `@types/jest ^27.5.1`
- `@types/js-yaml ^3.12.3`
- `@types/lodash >=4.14.117`
- `@types/markdown-to-jsx ^6.9.0`
- `@types/node ^10.17.60`
- `@types/prettier ^1.19.0`
- `@types/react ^16.9.22`
- `@types/react-dom ^16.9.5`
- `@types/react-router-dom ^4.3.1`
- `@types/react-test-renderer ^16.0.2`
- `@types/react-virtualized ^9.18.7`
- `autoprefixer ^10.4.1`
- `browserslist 4.16.5`
- `coveralls ^3.0.2`
- `enzyme ^3.10.0`
- `enzyme-adapter-react-16 ^1.15.1`
- `enzyme-to-json ^3.3.4`
- `fs 0.0.1-security`
- `jest-environment-jsdom-sixteen ^2.0.0`
- `license-checker ^25.0.1`
- `postcss ^8.4.5`
- `prettier 1.19.1`
- `react-router-test-context ^0.1.0`
- `react-scripts ^5.0.0`
- `react-test-renderer ^16.5.2`
- `snapshot-diff ^0.6.1`
- `swagger-ts-client ^0.9.6`
- `tailwindcss ^3.0.11`
- `ts-node ^7.0.1`
- `ts-node-dev ^1.1.8`
- `tsconfig-paths ^3.10.1`
- `tslint-config-prettier ^1.18.0`
- `typescript ^3.8.3`
- `webpack-bundle-analyzer ^3.6.1`
- `yaml ^2.0.0`
</details>
<details><summary>frontend/server/package.json</summary>
- `@google-cloud/storage ^2.5.0`
- `@kubernetes/client-node ^0.8.2`
- `axios >=0.21.1`
- `crypto-js ^3.1.8`
- `express ^4.16.3`
- `gunzip-maybe ^1.4.1`
- `http-proxy-middleware ^0.18.0`
- `lodash >=4.17.21`
- `minio ^7.0.0`
- `node-fetch ^2.6.1`
- `peek-stream ^1.1.3`
- `portable-fetch ^3.0.0`
- `tar-stream ^2.1.0`
- `@types/crypto-js ^3.1.43`
- `@types/express ^4.11.1`
- `@types/gunzip-maybe ^1.4.0`
- `@types/http-proxy-middleware ^0.19.3`
- `@types/jest ^24.9.1`
- `@types/minio ^7.0.3`
- `@types/node ^10.17.11`
- `@types/node-fetch ^2.1.2`
- `@types/supertest ^2.0.8`
- `@types/tar ^4.0.3`
- `@types/tar-stream ^1.6.1`
- `jest ^25.3.0`
- `prettier 1.19.1`
- `supertest ^4.0.2`
- `ts-jest ^25.2.1`
- `tslint ^5.20.1`
- `typescript ^3.6.4`
</details>
<details><summary>package.json</summary>
- `standard-version ^8.0.0`
</details>
<details><summary>samples/contrib/pytorch-samples/package.json</summary>
- `npm ^8.11.0`
- `yarn ^1.22.10`
</details>
<details><summary>test/frontend-integration-test/package.json</summary>
- `lodash >=4.17.21`
- `mocha ^5.2.0`
- `wait-port ^0.2.2`
- `wdio-junit-reporter ^0.4.4`
- `wdio-mocha-framework ^0.6.2`
- `wdio-selenium-standalone-service 0.0.10`
- `webdriverio ^4.14.1`
</details>
</blockquote>
</details>
<details><summary>nvm</summary>
<blockquote>
<details><summary>frontend/.nvmrc</summary>
- `node v12.14.1`
</details>
</blockquote>
</details>
<details><summary>pip_requirements</summary>
<blockquote>
<details><summary>backend/metadata_writer/requirements.txt</summary>
- `absl-py ==0.12.0`
- `attrs ==20.3.0`
- `cachetools ==5.0.0`
- `certifi ==2021.10.8`
- `charset-normalizer ==2.0.10`
- `google-auth ==2.4.1`
- `grpcio ==1.43.0`
- `idna ==3.3`
- `kubernetes ==10.1.0`
- `lru-dict ==1.1.7`
- `ml-metadata ==1.5.0`
- `oauthlib ==3.1.1`
- `protobuf ==3.19.3`
- `pyasn1-modules ==0.2.8`
- `pyasn1 ==0.4.8`
- `python-dateutil ==2.8.2`
- `pyyaml ==3.13`
- `requests-oauthlib ==1.3.0`
- `requests ==2.27.1`
- `rsa ==4.8`
- `six ==1.16.0`
- `urllib3 ==1.26.8`
- `websocket-client ==1.2.3`
</details>
<details><summary>backend/requirements.txt</summary>
- `absl-py ==0.11.0`
- `apache-beam ==2.34.0`
- `argcomplete ==1.12.3`
- `argon2-cffi-bindings ==21.2.0`
- `argon2-cffi ==21.3.0`
- `astunparse ==1.6.3`
- `attrs ==20.3.0`
- `avro-python3 ==1.9.2.1`
- `backcall ==0.2.0`
- `bleach ==4.1.0`
- `cached-property ==1.5.2`
- `cachetools ==4.2.4`
- `certifi ==2021.10.8`
- `cffi ==1.15.0`
- `charset-normalizer ==2.0.9`
- `clang ==5.0`
- `click ==7.1.2`
- `cloudpickle ==2.0.0`
- `crcmod ==1.7`
- `debugpy ==1.5.1`
- `decorator ==5.1.0`
- `defusedxml ==0.7.1`
- `deprecated ==1.2.13`
- `dill ==0.3.1.1`
- `docker ==4.4.4`
- `docopt ==0.6.2`
- `docstring-parser ==0.13`
- `entrypoints ==0.3`
- `fastavro ==1.4.7`
- `fasteners ==0.16.3`
- `fire ==0.4.0`
- `flatbuffers ==1.12`
- `future ==0.18.2`
- `gast ==0.4.0`
- `google-api-core ==1.31.5`
- `google-api-python-client ==1.12.8`
- `google-apitools ==0.5.31`
- `google-auth-httplib2 ==0.1.0`
- `google-auth-oauthlib ==0.4.6`
- `google-auth ==1.35.0`
- `google-cloud-aiplatform ==1.8.1`
- `google-cloud-bigquery-storage ==2.10.1`
- `google-cloud-bigquery ==2.31.0`
- `google-cloud-bigtable ==1.7.0`
- `google-cloud-core ==1.7.2`
- `google-cloud-datastore ==1.15.3`
- `google-cloud-dlp ==1.0.0`
- `google-cloud-language ==1.3.0`
- `google-cloud-pubsub ==1.7.0`
- `google-cloud-recommendations-ai ==0.2.0`
- `google-cloud-spanner ==1.19.1`
- `google-cloud-storage ==1.43.0`
- `google-cloud-videointelligence ==1.16.1`
- `google-cloud-vision ==1.0.0`
- `google-crc32c ==1.3.0`
- `google-pasta ==0.2.0`
- `google-resumable-media ==2.1.0`
- `googleapis-common-protos ==1.54.0`
- `grpc-google-iam-v1 ==0.12.3`
- `grpcio-gcp ==0.2.2`
- `grpcio ==1.43.0`
- `h5py ==3.1.0`
- `hdfs ==2.6.0`
- `httplib2 ==0.19.1`
- `idna ==3.3`
- `importlib-metadata ==4.10.0`
- `ipykernel ==6.6.0`
- `ipython-genutils ==0.2.0`
- `ipython ==7.30.1`
- `ipywidgets ==7.6.5`
- `jedi ==0.18.1`
- `jinja2 ==3.0.3`
- `joblib ==0.14.1`
- `jsonschema ==3.2.0`
- `jupyter-client ==7.1.0`
- `jupyter-core ==4.9.1`
- `jupyterlab-pygments ==0.1.2`
- `jupyterlab-widgets ==1.0.2`
- `keras-preprocessing ==1.1.2`
- `keras-tuner ==1.1.0`
- `keras ==2.6.0`
- `kfp-pipeline-spec ==0.1.13`
- `kfp-server-api ==1.7.1`
- `kfp ==1.8.10`
- `kt-legacy ==1.0.4`
- `kubernetes ==12.0.1`
- `libcst ==0.3.23`
- `markdown ==3.3.6`
- `markupsafe ==2.0.1`
- `matplotlib-inline ==0.1.3`
- `mistune ==0.8.4`
- `ml-metadata ==1.4.0`
- `ml-pipelines-sdk ==1.4.0`
- `mypy-extensions ==0.4.3`
- `nbclient ==0.5.9`
- `nbconvert ==6.3.0`
- `nbformat ==5.1.3`
- `nest-asyncio ==1.5.4`
- `notebook ==6.4.6`
- `numpy ==1.19.5`
- `oauth2client ==4.1.3`
- `oauthlib ==3.1.1`
- `opt-einsum ==3.3.0`
- `orjson ==3.6.5`
- `packaging ==20.9`
- `pandas ==1.3.5`
- `pandocfilters ==1.5.0`
- `parso ==0.8.3`
- `pexpect ==4.8.0`
- `pickleshare ==0.7.5`
- `portpicker ==1.5.0`
- `prometheus-client ==0.12.0`
- `prompt-toolkit ==3.0.24`
- `proto-plus ==1.19.8`
- `protobuf ==3.19.1`
- `psutil ==5.8.0`
- `ptyprocess ==0.7.0`
- `pyarrow ==5.0.0`
- `pyasn1-modules ==0.2.8`
- `pyasn1 ==0.4.8`
- `pycparser ==2.21`
- `pydantic ==1.8.2`
- `pydot ==1.4.2`
- `pygments ==2.10.0`
- `pymongo ==3.12.3`
- `pyparsing ==2.4.7`
- `pyrsistent ==0.18.0`
- `python-dateutil ==2.8.2`
- `pytz ==2021.3`
- `pyyaml ==5.4.1`
- `pyzmq ==22.3.0`
- `requests-oauthlib ==1.3.0`
- `requests-toolbelt ==0.9.1`
- `requests ==2.26.0`
- `rsa ==4.8`
- `scipy ==1.7.3`
- `send2trash ==1.8.0`
- `six ==1.15.0`
- `strip-hints ==0.1.10`
- `tabulate ==0.8.9`
- `tensorboard-data-server ==0.6.1`
- `tensorboard-plugin-wit ==1.8.0`
- `tensorboard ==2.6.0`
- `tensorflow-data-validation ==1.4.0`
- `tensorflow-estimator ==2.6.0`
- `tensorflow-hub ==0.12.0`
- `tensorflow-metadata ==1.4.0`
- `tensorflow-model-analysis ==0.35.0`
- `tensorflow-serving-api ==2.6.2`
- `tensorflow-transform ==1.4.0`
- `tensorflow ==2.6.2`
- `termcolor ==1.1.0`
- `terminado ==0.12.1`
- `testpath ==0.5.0`
- `tfx-bsl ==1.4.0`
- `tfx ==1.4.0`
- `tornado ==6.1`
- `traitlets ==5.1.1`
- `typer ==0.4.0`
- `typing-extensions ==3.7.4.3`
- `typing-inspect ==0.7.1`
- `uritemplate ==3.0.1`
- `urllib3 ==1.26.7`
- `wcwidth ==0.2.5`
- `webencodings ==0.5.1`
- `websocket-client ==1.2.3`
- `werkzeug ==2.0.2`
- `wheel ==0.37.1`
- `widgetsnbextension ==3.5.2`
- `wrapt ==1.12.1`
- `zipp ==3.6.0`
</details>
<details><summary>backend/src/apiserver/visualization/requirements.txt</summary>
- `absl-py ==0.12.0`
- `apache-beam ==2.34.0`
- `argon2-cffi-bindings ==21.2.0`
- `argon2-cffi ==21.3.0`
- `astunparse ==1.6.3`
- `attrs ==21.2.0`
- `avro-python3 ==1.9.2.1`
- `backcall ==0.2.0`
- `bleach ==4.1.0`
- `bokeh ==1.2.0`
- `cached-property ==1.5.2`
- `cachetools ==4.2.4`
- `certifi ==2021.10.8`
- `cffi ==1.15.0`
- `charset-normalizer ==2.0.9`
- `crcmod ==1.7`
- `dataclasses ==0.8`
- `decorator ==5.1.0`
- `defusedxml ==0.7.1`
- `dill ==0.3.1.1`
- `docopt ==0.6.2`
- `entrypoints ==0.3`
- `fastavro ==1.4.7`
- `fasteners ==0.16.3`
- `flatbuffers ==1.12`
- `future ==0.18.2`
- `gast ==0.4.0`
- `gcsfs ==0.2.3`
- `google-api-core ==1.31.5`
- `google-api-python-client ==1.7.12`
- `google-apitools ==0.5.31`
- `google-auth-httplib2 ==0.1.0`
- `google-auth-oauthlib ==0.4.6`
- `google-auth ==1.35.0`
- `google-cloud-bigquery-storage ==2.10.1`
- `google-cloud-bigquery ==2.20.0`
- `google-cloud-bigtable ==1.7.0`
- `google-cloud-core ==1.7.2`
- `google-cloud-datastore ==1.15.3`
- `google-cloud-dlp ==1.0.0`
- `google-cloud-language ==1.3.0`
- `google-cloud-pubsub ==1.7.0`
- `google-cloud-recommendations-ai ==0.2.0`
- `google-cloud-spanner ==1.19.1`
- `google-cloud-videointelligence ==1.16.1`
- `google-cloud-vision ==1.0.0`
- `google-crc32c ==1.3.0`
- `google-pasta ==0.2.0`
- `google-resumable-media ==1.3.3`
- `googleapis-common-protos ==1.54.0`
- `grpc-google-iam-v1 ==0.12.3`
- `grpcio-gcp ==0.2.2`
- `grpcio ==1.34.1`
- `h5py ==3.1.0`
- `hdfs ==2.6.0`
- `httplib2 ==0.19.1`
- `idna ==3.3`
- `importlib-metadata ==4.8.3`
- `ipykernel ==5.1.1`
- `ipython-genutils ==0.2.0`
- `ipython ==7.12.0`
- `ipywidgets ==7.6.5`
- `itables ==0.1.0`
- `jedi ==0.18.1`
- `jinja2 ==3.0.3`
- `joblib ==0.14.1`
- `jsonschema ==3.2.0`
- `jupyter-client ==5.3.5`
- `jupyter-core ==4.9.1`
- `jupyterlab-widgets ==1.0.2`
- `keras-nightly ==2.5.0.dev2021032900`
- `keras-preprocessing ==1.1.2`
- `libcst ==0.3.23`
- `markdown ==3.3.6`
- `markupsafe ==2.0.1`
- `mistune ==0.8.4`
- `mypy-extensions ==0.4.3`
- `nbconvert ==5.5.0`
- `nbformat ==4.4.0`
- `nest-asyncio ==1.5.4`
- `notebook ==6.4.6`
- `numpy ==1.19.5`
- `oauth2client ==4.1.3`
- `oauthlib ==3.1.1`
- `opt-einsum ==3.3.0`
- `orjson ==3.6.1`
- `packaging ==21.3`
- `pandas ==1.1.5`
- `pandocfilters ==1.5.0`
- `parso ==0.8.3`
- `pexpect ==4.8.0`
- `pickleshare ==0.7.5`
- `pillow ==8.4.0`
- `prometheus-client ==0.12.0`
- `prompt-toolkit ==3.0.24`
- `proto-plus ==1.19.8`
- `protobuf ==3.19.1`
- `ptyprocess ==0.7.0`
- `pyarrow ==2.0.0`
- `pyasn1-modules ==0.2.8`
- `pyasn1 ==0.4.8`
- `pycparser ==2.21`
- `pydot ==1.4.2`
- `pygments ==2.10.0`
- `pymongo ==3.12.3`
- `pyparsing ==2.4.7`
- `pyrsistent ==0.18.0`
- `python-dateutil ==2.8.2`
- `pytz ==2021.3`
- `pyyaml ==6.0`
- `pyzmq ==22.3.0`
- `requests-oauthlib ==1.3.0`
- `requests ==2.26.0`
- `rsa ==4.8`
- `scikit_learn ==0.21.2`
- `scipy ==1.5.4`
- `send2trash ==1.8.0`
- `six ==1.15.0`
- `tensorboard-data-server ==0.6.1`
- `tensorboard-plugin-wit ==1.8.0`
- `tensorboard ==2.7.0`
- `tensorflow-data-validation ==1.2.0`
- `tensorflow-estimator ==2.5.0`
- `tensorflow-metadata ==1.2.0`
- `tensorflow-model-analysis ==0.33.0`
- `tensorflow-serving-api ==2.5.1`
- `tensorflow ==2.5.1`
- `termcolor ==1.1.0`
- `terminado ==0.12.1`
- `testpath ==0.5.0`
- `tfx-bsl ==1.2.0`
- `tornado ==6.1`
- `traitlets ==4.3.3`
- `typing-extensions ==3.7.4.3`
- `typing-inspect ==0.7.1`
- `uritemplate ==3.0.1`
- `urllib3 ==1.26.7`
- `wcwidth ==0.2.5`
- `webencodings ==0.5.1`
- `werkzeug ==2.0.2`
- `wheel ==0.37.1`
- `widgetsnbextension ==3.5.2`
- `wrapt ==1.12.1`
- `zipp ==3.6.0`
</details>
<details><summary>backend/src/v2/test/requirements.txt</summary>
- `ml-metadata ==1.5.0`
- `minio ==7.0.4`
- `google-cloud-storage no version found`
</details>
<details><summary>components/contrib/arena/docker/requirements.txt</summary>
- `requests ==2.18.4`
- `six ==1.11.0`
- `pyyaml ==3.12`
</details>
<details><summary>components/contrib/sample/keras/train_classifier/requirements.txt</summary>
- `keras no version found`
</details>
<details><summary>components/gcp/container/component_sdk/python/test-requirements.txt</summary>
- `flake8 no version found`
- `pytest no version found`
- `mock no version found`
</details>
<details><summary>components/gcp/container/requirements.txt</summary>
- `pandas no version found`
</details>
<details><summary>components/kserve/requirements.txt</summary>
- `kubernetes ==19.15.0`
- `kserve ==0.7.0`
</details>
<details><summary>components/kubeflow/dnntrainer/requirements.txt</summary>
- `pyyaml ==3.12`
- `six ==1.11.0`
- `tensorflow-transform ==0.23.0`
- `tensorflow-model-analysis ==0.23.0`
</details>
<details><summary>components/kubeflow/katib-launcher/requirements.txt</summary>
- `kubernetes ==10.0.1`
- `kubeflow-katib ==0.10.1`
</details>
<details><summary>components/kubeflow/kfserving/requirements.txt</summary>
- `kubernetes ==12.0.0`
- `kfserving ==0.5.1`
</details>
<details><summary>components/kubeflow/launcher/requirements.txt</summary>
- `pyyaml no version found`
- `kubernetes no version found`
</details>
<details><summary>components/kubeflow/pytorch-launcher/requirements.txt</summary>
- `pyyaml no version found`
- `kubernetes no version found`
- `kubeflow-pytorchjob no version found`
- `retrying no version found`
</details>
<details><summary>components/local/base/requirements.txt</summary>
- `pandas ==0.24.2`
- `scikit-learn ==0.21.2`
- `scipy ==1.4.1`
- `tensorflow ==2.2.0`
</details>
<details><summary>contrib/components/openvino/ovms-deployer/containers/requirements.txt</summary>
- `jinja2 ==2.11.3`
- `futures ==3.1.1`
- `tensorflow-serving-api ==1.13.0`
</details>
<details><summary>contrib/components/openvino/predict/containers/requirements.txt</summary>
- `numpy no version found`
- `google-cloud-storage no version found`
</details>
<details><summary>docs/requirements.txt</summary>
- `sphinx ==5.0.2`
- `sphinx-click ==4.3.0`
- `sphinx-rtd-theme ==1.0.0`
- `m2r2 ==0.3.2`
</details>
<details><summary>proxy/requirements.txt</summary>
- `requests no version found`
</details>
<details><summary>requirements.txt</summary>
- `absl-py ==0.11.0`
- `ansiwrap ==0.8.4`
- `apache-beam ==2.31.0`
- `appdirs ==1.4.4`
- `argcomplete ==1.12.3`
- `argon2-cffi ==20.1.0`
- `astunparse ==1.6.3`
- `attrs ==20.3.0`
- `avro-python3 ==1.9.2.1`
- `backcall ==0.2.0`
- `black ==21.7b0`
- `bleach ==4.0.0`
- `cached-property ==1.5.2`
- `cachetools ==4.2.2`
- `certifi ==2021.5.30`
- `cffi ==1.14.6`
- `charset-normalizer ==2.0.4`
- `click ==7.1.2`
- `cloudpickle ==1.6.0`
- `colorama ==0.4.4`
- `crcmod ==1.7`
- `debugpy ==1.4.1`
- `decorator ==5.0.9`
- `defusedxml ==0.7.1`
- `deprecated ==1.2.12`
- `dill ==0.3.1.1`
- `docker ==4.4.4`
- `docopt ==0.6.2`
- `docstring-parser ==0.10`
- `entrypoints ==0.3`
- `fastavro ==1.4.4`
- `fasteners ==0.16.3`
- `fire ==0.4.0`
- `flatbuffers ==1.12`
- `future ==0.18.2`
- `gast ==0.4.0`
- `google-api-core ==1.31.2`
- `google-api-python-client ==1.12.8`
- `google-apitools ==0.5.31`
- `google-auth-httplib2 ==0.1.0`
- `google-auth-oauthlib ==0.4.5`
- `google-auth ==1.35.0`
- `google-cloud-aiplatform ==0.7.1`
- `google-cloud-bigquery ==1.28.0`
- `google-cloud-bigtable ==1.7.0`
- `google-cloud-core ==1.7.2`
- `google-cloud-datastore ==1.15.3`
- `google-cloud-dlp ==1.0.0`
- `google-cloud-language ==1.3.0`
- `google-cloud-profiler ==3.0.5`
- `google-cloud-pubsub ==1.7.0`
- `google-cloud-spanner ==1.19.1`
- `google-cloud-storage ==1.42.0`
- `google-cloud-videointelligence ==1.16.1`
- `google-cloud-vision ==1.0.0`
- `google-crc32c ==1.1.2`
- `google-pasta ==0.2.0`
- `google-resumable-media ==1.3.3`
- `googleapis-common-protos ==1.53.0`
- `grpc-google-iam-v1 ==0.12.3`
- `grpcio-gcp ==0.2.2`
- `grpcio ==1.34.1`
- `h5py ==3.1.0`
- `hdfs ==2.6.0`
- `httplib2 ==0.19.1`
- `idna ==3.2`
- `importlib-metadata ==4.6.4`
- `ipykernel ==6.2.0`
- `ipython-genutils ==0.2.0`
- `ipython ==7.26.0`
- `ipywidgets ==7.6.3`
- `jedi ==0.18.0`
- `jinja2 ==2.11.3`
- `joblib ==0.14.1`
- `jsonschema ==3.2.0`
- `junit-xml ==1.9`
- `jupyter-client ==6.1.12`
- `jupyter-core ==4.7.1`
- `jupyterlab-pygments ==0.1.2`
- `jupyterlab-widgets ==1.0.0`
- `keras-nightly ==2.5.0.dev2021032900`
- `keras-preprocessing ==1.1.2`
- `keras-tuner ==1.0.1`
- `kfp-pipeline-spec ==0.1.9`
- `kfp-server-api ==1.6.0`
- `kfp ==1.7.1`
- `kubernetes ==12.0.1`
- `markdown ==3.3.4`
- `markupsafe ==2.0.1`
- `matplotlib-inline ==0.1.2`
- `minio ==7.1.0`
- `mistune ==0.8.4`
- `ml-metadata ==1.2.0`
- `ml-pipelines-sdk ==1.2.0`
- `mypy-extensions ==0.4.3`
- `nbclient ==0.5.4`
- `nbconvert ==6.1.0`
- `nbformat ==5.1.3`
- `nest-asyncio ==1.5.1`
- `notebook ==6.4.3`
- `numpy ==1.19.5`
- `oauth2client ==4.1.3`
- `oauthlib ==3.1.1`
- `opt-einsum ==3.3.0`
- `packaging ==20.9`
- `pandas ==1.3.2`
- `pandocfilters ==1.4.3`
- `papermill ==2.3.3`
- `parso ==0.8.2`
- `pathspec ==0.9.0`
- `pexpect ==4.8.0`
- `pickleshare ==0.7.5`
- `portpicker ==1.4.0`
- `prometheus-client ==0.11.0`
- `prompt-toolkit ==3.0.19`
- `proto-plus ==1.19.0`
- `protobuf ==3.17.3`
- `ptyprocess ==0.7.0`
- `pyarrow ==2.0.0`
- `pyasn1-modules ==0.2.8`
- `pyasn1 ==0.4.8`
- `pycparser ==2.20`
- `pydot ==1.4.2`
- `pygments ==2.10.0`
- `pymongo ==3.12.0`
- `pyparsing ==2.4.7`
- `pyrsistent ==0.18.0`
- `python-dateutil ==2.8.2`
- `pytz ==2021.1`
- `pyyaml ==5.4.1`
- `pyzmq ==22.2.1`
- `regex ==2021.8.3`
- `requests-oauthlib ==1.3.0`
- `requests-toolbelt ==0.9.1`
- `requests ==2.26.0`
- `rsa ==4.7.2`
- `scikit-learn ==0.24.2`
- `scipy ==1.7.1`
- `send2trash ==1.8.0`
- `six ==1.15.0`
- `strip-hints ==0.1.10`
- `tabulate ==0.8.9`
- `tenacity ==8.0.1`
- `tensorboard-data-server ==0.6.1`
- `tensorboard-plugin-wit ==1.8.0`
- `tensorboard ==2.6.0`
- `tensorflow-data-validation ==1.2.0`
- `tensorflow-estimator ==2.5.0`
- `tensorflow-hub ==0.12.0`
- `tensorflow-metadata ==1.2.0`
- `tensorflow-model-analysis ==0.33.0`
- `tensorflow-serving-api ==2.5.1`
- `tensorflow-transform ==1.2.0`
- `tensorflow ==2.5.1`
- `termcolor ==1.1.0`
- `terminado ==0.11.0`
- `terminaltables ==3.1.0`
- `testpath ==0.5.0`
- `textwrap3 ==0.9.2`
- `tfx-bsl ==1.2.0`
- `tfx ==1.2.0`
- `threadpoolctl ==2.2.0`
- `tomli ==1.2.1`
- `tornado ==6.1`
- `tqdm ==4.62.1`
- `traitlets ==5.0.5`
- `typed-ast ==1.4.3`
- `typing-extensions ==3.7.4.3`
- `uritemplate ==3.0.1`
- `urllib3 ==1.26.6`
- `wcwidth ==0.2.5`
- `webencodings ==0.5.1`
- `websocket-client ==1.2.1`
- `werkzeug ==2.0.1`
- `wheel ==0.37.0`
- `widgetsnbextension ==3.5.1`
- `wrapt ==1.12.1`
- `yamale ==3.0.8`
- `zipp ==3.5.0`
</details>
<details><summary>samples/contrib/azure-samples/databricks-pipelines/requirements.txt</summary>
- `fire >=0.2.1`
</details>
<details><summary>samples/contrib/ibm-samples/ffdl-seldon/source/seldon-pytorch-serving-image/requirements.txt</summary>
- `torch ==1.0.0`
- `torchvision ==0.2.1`
- `boto3 ==1.9.83`
- `pandas no version found`
- `numpy no version found`
- `pyyaml no version found`
- `torchsummary no version found`
- `numpy no version found`
</details>
<details><summary>samples/contrib/nvidia-resnet/components/preprocess/requirements.txt</summary>
- `keras no version found`
</details>
<details><summary>samples/contrib/nvidia-resnet/components/train/requirements.txt</summary>
- `keras no version found`
</details>
<details><summary>samples/contrib/pytorch-samples/bert/requirements.txt</summary>
- `pytorch-lightning no version found`
- `sklearn no version found`
- `captum no version found`
- `torchtext no version found`
</details>
<details><summary>samples/contrib/pytorch-samples/cifar10/requirements.txt</summary>
- `pytorch-lightning no version found`
- `sklearn no version found`
- `captum no version found`
- `torchtext no version found`
</details>
<details><summary>samples/contrib/pytorch-samples/requirements.txt</summary>
- `boto3 no version found`
- `image no version found`
- `matplotlib no version found`
- `pyarrow no version found`
- `sklearn no version found`
- `transformers no version found`
- `torchdata no version found`
- `webdataset no version found`
- `pandas no version found`
- `s3fs no version found`
- `wget no version found`
- `torch-model-archiver no version found`
- `minio no version found`
- `kfp no version found`
- `tensorboard no version found`
- `torchmetrics no version found`
- `pytorch-lightning no version found`
</details>
<details><summary>samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/download_dataset/requirements.txt</summary>
- `kaggle no version found`
- `google-cloud-storage no version found`
</details>
<details><summary>samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/submit_result/requirements.txt</summary>
- `kaggle no version found`
- `gcsfs no version found`
</details>
<details><summary>samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/train_model/requirements.txt</summary>
- `pandas ==0.25.1`
- `gcsfs no version found`
- `numpy no version found`
- `matplotlib no version found`
- `seaborn no version found`
- `sklearn no version found`
</details>
<details><summary>samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/visualize_html/requirements.txt</summary>
- `gcsfs no version found`
- `pandas no version found`
- `matplotlib no version found`
- `seaborn no version found`
</details>
<details><summary>samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/visualize_table/requirements.txt</summary>
- `gcsfs no version found`
- `pandas no version found`
</details>
<details><summary>sdk/python/requirements.txt</summary>
- `absl-py ==1.0.0`
- `attrs ==21.4.0`
- `cachetools ==5.0.0`
- `certifi ==2021.10.8`
- `charset-normalizer ==2.0.12`
- `click ==8.1.3`
- `cloudpickle ==2.0.0`
- `deprecated ==1.2.13`
- `docstring-parser ==0.14.1`
- `fire ==0.4.0`
- `google-api-core ==2.7.3`
- `google-auth ==2.6.6`
- `google-cloud-core ==2.3.0`
- `google-cloud-storage ==2.3.0`
- `google-crc32c ==1.3.0`
- `google-resumable-media ==2.3.2`
- `googleapis-common-protos ==1.56.0`
- `idna ==3.3`
- `importlib-metadata ==4.11.3`
- `jsonschema ==3.2.0`
- `kfp-pipeline-spec ==0.1.14`
- `kfp-server-api ==2.0.0a2`
- `kubernetes ==18.20.0`
- `oauthlib ==3.2.0`
- `protobuf ==3.20.1`
- `pyasn1 ==0.4.8`
- `pyasn1-modules ==0.2.8`
- `pyrsistent ==0.18.1`
- `python-dateutil ==2.8.2`
- `pyyaml ==5.4.1`
- `requests ==2.27.1`
- `requests-oauthlib ==1.3.1`
- `requests-toolbelt ==0.9.1`
- `rsa ==4.8`
- `six ==1.16.0`
- `strip-hints ==0.1.10`
- `tabulate ==0.8.9`
- `termcolor ==1.1.0`
- `typer ==0.4.1`
- `typing-extensions ==4.2.0`
- `uritemplate ==3.0.1`
- `urllib3 ==1.26.9`
- `websocket-client ==1.3.2`
- `wheel ==0.37.1`
- `wrapt ==1.14.1`
- `zipp ==3.8.0`
</details>
<details><summary>test/kfp-functional-test/requirements.txt</summary>
- `attrs ==20.3.0`
- `cachetools ==4.1.1`
- `certifi ==2020.12.5`
- `cffi ==1.14.4`
- `chardet ==3.0.4`
- `click ==7.1.2`
- `cloudpickle ==1.6.0`
- `deprecated ==1.2.10`
- `docstring-parser ==0.7.3`
- `google-api-core ==1.23.0`
- `google-auth ==1.23.0`
- `google-cloud-core ==1.4.4`
- `google-cloud-storage ==1.32.0`
- `google-crc32c ==1.0.0`
- `google-resumable-media ==1.1.0`
- `googleapis-common-protos ==1.52.0`
- `idna ==2.10`
- `importlib-metadata ==2.1.1`
- `jsonschema ==3.2.0`
- `kfp-pipeline-spec ==0.1.3.1`
- `kfp-server-api ==1.1.2rc1`
- `kfp ==1.1.2`
- `kubernetes ==11.0.0`
- `oauthlib ==3.1.0`
- `protobuf ==3.14.0`
- `pyasn1-modules ==0.2.8`
- `pyasn1 ==0.4.8`
- `pycparser ==2.20`
- `pyrsistent ==0.17.3`
- `python-dateutil ==2.8.1`
- `pytz ==2020.4`
- `pyyaml ==5.3.1`
- `requests-oauthlib ==1.3.0`
- `requests-toolbelt ==0.9.1`
- `requests ==2.25.0`
- `rsa ==4.6`
- `six ==1.15.0`
- `strip-hints ==0.1.9`
- `tabulate ==0.8.7`
- `urllib3 ==1.26.2`
- `websocket-client ==0.57.0`
- `wheel ==0.36.1`
- `wrapt ==1.12.1`
- `zipp ==1.2.0`
</details>
<details><summary>test/sample-test/requirements.txt</summary>
- `absl-py ==0.11.0`
- `ansiwrap ==0.8.4`
- `apache-beam ==2.34.0`
- `appdirs ==1.4.4`
- `argcomplete ==1.12.3`
- `argon2-cffi-bindings ==21.2.0`
- `argon2-cffi ==21.3.0`
- `astunparse ==1.6.3`
- `attrs ==20.3.0`
- `avro-python3 ==1.9.2.1`
- `backcall ==0.2.0`
- `black ==21.7b0`
- `bleach ==4.1.0`
- `cached-property ==1.5.2`
- `cachetools ==4.2.4`
- `certifi ==2021.10.8`
- `cffi ==1.15.0`
- `charset-normalizer ==2.0.9`
- `clang ==5.0`
- `click ==7.1.2`
- `cloudpickle ==2.0.0`
- `crcmod ==1.7`
- `debugpy ==1.5.1`
- `decorator ==5.1.0`
- `defusedxml ==0.7.1`
- `deprecated ==1.2.13`
- `dill ==0.3.1.1`
- `docker ==4.4.4`
- `docopt ==0.6.2`
- `docstring-parser ==0.13`
- `entrypoints ==0.3`
- `fastavro ==1.4.7`
- `fasteners ==0.16.3`
- `fire ==0.4.0`
- `flatbuffers ==1.12`
- `future ==0.18.2`
- `gast ==0.4.0`
- `google-api-core ==1.31.5`
- `google-api-python-client ==1.12.8`
- `google-apitools ==0.5.31`
- `google-auth-httplib2 ==0.1.0`
- `google-auth-oauthlib ==0.4.6`
- `google-auth ==1.35.0`
- `google-cloud-aiplatform ==1.8.1`
- `google-cloud-bigquery-storage ==2.10.1`
- `google-cloud-bigquery ==2.31.0`
- `google-cloud-bigtable ==1.7.0`
- `google-cloud-core ==1.7.2`
- `google-cloud-datastore ==1.15.3`
- `google-cloud-dlp ==1.0.0`
- `google-cloud-language ==1.3.0`
- `google-cloud-pubsub ==1.7.0`
- `google-cloud-recommendations-ai ==0.2.0`
- `google-cloud-spanner ==1.19.1`
- `google-cloud-storage ==1.43.0`
- `google-cloud-videointelligence ==1.16.1`
- `google-cloud-vision ==1.0.0`
- `google-crc32c ==1.3.0`
- `google-pasta ==0.2.0`
- `google-resumable-media ==2.1.0`
- `googleapis-common-protos ==1.54.0`
- `grpc-google-iam-v1 ==0.12.3`
- `grpcio-gcp ==0.2.2`
- `grpcio ==1.43.0`
- `h5py ==3.1.0`
- `hdfs ==2.6.0`
- `httplib2 ==0.19.1`
- `idna ==3.3`
- `importlib-metadata ==4.10.0`
- `ipykernel ==6.6.0`
- `ipython-genutils ==0.2.0`
- `ipython ==7.30.1`
- `ipywidgets ==7.6.5`
- `jedi ==0.18.1`
- `jinja2 ==3.0.3`
- `joblib ==0.14.1`
- `jsonschema ==3.2.0`
- `junit-xml ==1.9`
- `jupyter-client ==7.1.0`
- `jupyter-core ==4.9.1`
- `jupyterlab-pygments ==0.1.2`
- `jupyterlab-widgets ==1.0.2`
- `keras-preprocessing ==1.1.2`
- `keras-tuner ==1.1.0`
- `keras ==2.6.0`
- `kfp-pipeline-spec ==0.1.13`
- `kfp-server-api ==1.7.1`
- `kfp ==1.8.10`
- `kt-legacy ==1.0.4`
- `kubernetes ==12.0.1`
- `libcst ==0.3.23`
- `markdown ==3.3.6`
- `markupsafe ==2.0.1`
- `matplotlib-inline ==0.1.3`
- `minio ==7.1.2`
- `mistune ==0.8.4`
- `ml-metadata ==1.4.0`
- `ml-pipelines-sdk ==1.4.0`
- `mypy-extensions ==0.4.3`
- `nbclient ==0.5.9`
- `nbconvert ==6.3.0`
- `nbformat ==5.1.3`
- `nest-asyncio ==1.5.4`
- `notebook ==6.4.6`
- `numpy ==1.19.5`
- `oauth2client ==4.1.3`
- `oauthlib ==3.1.1`
- `opt-einsum ==3.3.0`
- `orjson ==3.6.5`
- `packaging ==20.9`
- `pandas ==1.3.5`
- `pandocfilters ==1.5.0`
- `papermill ==2.3.3`
- `parso ==0.8.3`
- `pathspec ==0.9.0`
- `pexpect ==4.8.0`
- `pickleshare ==0.7.5`
- `portpicker ==1.5.0`
- `prometheus-client ==0.12.0`
- `prompt-toolkit ==3.0.24`
- `proto-plus ==1.19.8`
- `protobuf ==3.19.1`
- `psutil ==5.8.0`
- `ptyprocess ==0.7.0`
- `pyarrow ==5.0.0`
- `pyasn1-modules ==0.2.8`
- `pyasn1 ==0.4.8`
- `pycparser ==2.21`
- `pydantic ==1.8.2`
- `pydot ==1.4.2`
- `pygments ==2.10.0`
- `pymongo ==3.12.3`
- `pyparsing ==2.4.7`
- `pyrsistent ==0.18.0`
- `python-dateutil ==2.8.2`
- `pytz ==2021.3`
- `pyyaml ==5.4.1`
- `pyzmq ==22.3.0`
- `regex ==2021.11.10`
- `requests-oauthlib ==1.3.0`
- `requests-toolbelt ==0.9.1`
- `requests ==2.26.0`
- `rsa ==4.8`
- `scipy ==1.7.3`
- `send2trash ==1.8.0`
- `six ==1.15.0`
- `strip-hints ==0.1.10`
- `tabulate ==0.8.9`
- `tenacity ==8.0.1`
- `tensorboard-data-server ==0.6.1`
- `tensorboard-plugin-wit ==1.8.0`
- `tensorboard ==2.6.0`
- `tensorflow-data-validation ==1.4.0`
- `tensorflow-estimator ==2.6.0`
- `tensorflow-hub ==0.12.0`
- `tensorflow-metadata ==1.4.0`
- `tensorflow-model-analysis ==0.35.0`
- `tensorflow-serving-api ==2.6.2`
- `tensorflow-transform ==1.4.0`
- `tensorflow ==2.6.2`
- `termcolor ==1.1.0`
- `terminado ==0.12.1`
- `testpath ==0.5.0`
- `textwrap3 ==0.9.2`
- `tfx-bsl ==1.4.0`
- `tfx ==1.4.0`
- `tomli ==1.2.3`
- `tornado ==6.1`
- `tqdm ==4.62.3`
- `traitlets ==5.1.1`
- `typed-ast ==1.5.1`
- `typer ==0.4.0`
- `typing-extensions ==3.7.4.3`
- `typing-inspect ==0.7.1`
- `uritemplate ==3.0.1`
- `urllib3 ==1.26.7`
- `wcwidth ==0.2.5`
- `webencodings ==0.5.1`
- `websocket-client ==1.2.3`
- `werkzeug ==2.0.2`
- `wheel ==0.37.1`
- `widgetsnbextension ==3.5.2`
- `wrapt ==1.12.1`
- `yamale ==4.0.2`
- `zipp ==3.6.0`
</details>
</blockquote>
</details>
<details><summary>pip_setup</summary>
<blockquote>
<details><summary>api/v2alpha1/python/setup.py</summary>
- `protobuf >=3.13.0,<4`
</details>
<details><summary>components/PyTorch/pytorch-kfp-components/setup.py</summary>
- `pytorch-lightning >=1.4.0`
- `torch >=1.7.1`
- `mock >=4.0.0`
- `flake8 >=3.0.0`
- `pytest >=6.0.0`
</details>
<details><summary>components/contrib/arena/python/setup.py</summary>
- `kfp >= 0.1`
</details>
<details><summary>components/gcp/container/component_sdk/python/setup.py</summary>
- `kubernetes >= 8.0.1`
- `urllib3 >=1.15,<1.25`
- `fire == 0.1.3`
- `google-api-python-client == 1.7.8`
- `google-cloud-storage == 1.14.0`
- `google-cloud-bigquery == 1.9.0`
</details>
<details><summary>components/kubeflow/dnntrainer/src/setup.py</summary>
- `tensorflow ==1.15.4`
</details>
<details><summary>samples/test/utils/setup.py</summary>
- `nbconvert ~=6.0`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5068/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5067 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5067/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5067/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5067/events | https://github.com/kubeflow/pipelines/issues/5067 | 798,279,201 | MDU6SXNzdWU3OTgyNzkyMDE= | 5,067 | Dataset location on BigQuery component | {
"login": "BorFour",
"id": 25534385,
"node_id": "MDQ6VXNlcjI1NTM0Mzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/25534385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BorFour",
"html_url": "https://github.com/BorFour",
"followers_url": "https://api.github.com/users/BorFour/followers",
"following_url": "https://api.github.com/users/BorFour/following{/other_user}",
"gists_url": "https://api.github.com/users/BorFour/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BorFour/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BorFour/subscriptions",
"organizations_url": "https://api.github.com/users/BorFour/orgs",
"repos_url": "https://api.github.com/users/BorFour/repos",
"events_url": "https://api.github.com/users/BorFour/events{/privacy}",
"received_events_url": "https://api.github.com/users/BorFour/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1126834402,
"node_id": "MDU6TGFiZWwxMTI2ODM0NDAy",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components",
"name": "area/components",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | {
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I think the location should be exposed in the https://github.com/kubeflow/pipelines/blob/master/components/gcp/bigquery/query/to_CSV/component.yaml",
"/cc @chensun @neuromage ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-02-01T11:55:35 | 2022-04-28T18:00:19 | 2022-04-28T18:00:19 | NONE | null | I am using the [bigquery to_CSV component](https://raw.githubusercontent.com/kubeflow/pipelines/1.3.0/components/gcp/bigquery/query/to_CSV/component.yaml) like so:
```python
from kfp import dsl
from kfp.components import load_component_from_url
BQ_SPEC_URI = "https://raw.githubusercontent.com/kubeflow/pipelines/1.3.0/components/gcp/bigquery/query/to_CSV/component.yaml"
bq_to_csv_op = load_component_from_url(BQ_SPEC_URI)
...
@dsl.pipeline(name="My first pipeline", description="A hello world pipeline.")
def print_query_result_pipeline(
    project_id=PROJECT_ID, query=QUERY, filename=FILENAME
):
    bq_to_gcs_task = bq_to_csv_op(
        query=query, project_id=project_id, output_filename=filename
    )
```
The query tries to access a dataset whose location is the EU multi-region. My pipeline fails with `google.api_core.exceptions.NotFound: 404 Not found: Dataset XXXXX was not found in location US`.
I've noticed that, in the [component Python code](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/component_sdk/python/kfp_component/google/bigquery/_query.py), there is a `dataset_location` argument to the `_query()` function that is set to 'US' by default. Nonetheless, that argument isn't "exposed" in the runtime arguments of the component's YAML. Should a new argument be added to this file, or should `dataset_location` be passed some other way?
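For illustration, here is how the call might look once such an input is exposed — a sketch continuing the snippet above; the `dataset_location` argument is the assumption (it is not in the current component.yaml, although the underlying `_query()` already accepts it):
```python
# Hypothetical: `dataset_location` as a new component input, defaulting to "US".
bq_to_gcs_task = bq_to_csv_op(
    query=query,
    project_id=project_id,
    output_filename=filename,
    dataset_location="EU",
)
```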
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5067/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5061 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5061/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5061/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5061/events | https://github.com/kubeflow/pipelines/issues/5061 | 797,026,728 | MDU6SXNzdWU3OTcwMjY3Mjg= | 5,061 | present KFP at the Argo Workflows community meeting | {
"login": "alexec",
"id": 1142830,
"node_id": "MDQ6VXNlcjExNDI4MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1142830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexec",
"html_url": "https://github.com/alexec",
"followers_url": "https://api.github.com/users/alexec/followers",
"following_url": "https://api.github.com/users/alexec/following{/other_user}",
"gists_url": "https://api.github.com/users/alexec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexec/subscriptions",
"organizations_url": "https://api.github.com/users/alexec/orgs",
"repos_url": "https://api.github.com/users/alexec/repos",
"events_url": "https://api.github.com/users/alexec/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexec/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | {
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
},
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thank you for the invitation!\nI'm recently very busy with KFP v2 design I just shared with community. I'd love to sign up for a slot after a while when I get more time.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-01-29T16:43:22 | 2022-04-29T03:59:43 | 2022-04-29T03:59:43 | NONE | null | We'd love you to come and present at our community meeting! http://bit.ly/argo-wf-cmty-mtng
Or what about writing a guest blog post? https://blog.argoproj.io/
I think our community would be super interested. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5061/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5061/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5055 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5055/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5055/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5055/events | https://github.com/kubeflow/pipelines/issues/5055 | 796,655,777 | MDU6SXNzdWU3OTY2NTU3Nzc= | 5,055 | Feature Request: kfp.dsl.ResourceOp should have a caching strategy option | {
"login": "munagekar",
"id": 10258799,
"node_id": "MDQ6VXNlcjEwMjU4Nzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/10258799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/munagekar",
"html_url": "https://github.com/munagekar",
"followers_url": "https://api.github.com/users/munagekar/followers",
"following_url": "https://api.github.com/users/munagekar/following{/other_user}",
"gists_url": "https://api.github.com/users/munagekar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/munagekar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/munagekar/subscriptions",
"organizations_url": "https://api.github.com/users/munagekar/orgs",
"repos_url": "https://api.github.com/users/munagekar/repos",
"events_url": "https://api.github.com/users/munagekar/events{/privacy}",
"received_events_url": "https://api.github.com/users/munagekar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"ResourceOp is implemented using Argo. Fixing #4083 should provide this feature."
] | 2021-01-29T07:36:00 | 2021-01-29T07:44:17 | 2021-01-29T07:44:17 | CONTRIBUTOR | null | Currently only kfp.dsl.ContainerOp has a caching strategy option; ResourceOp should have similar options, since this Op also gets cached by minio.
REF: https://www.kubeflow.org/docs/pipelines/caching/#managing-caching-staleness
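For context, this is the knob that already exists on `ContainerOp` (per the caching doc above) — a minimal sketch, with the requested `ResourceOp` equivalent shown only as a comment since no such API exists yet:
```python
import kfp.dsl as dsl

def never_cache(task: dsl.ContainerOp) -> dsl.ContainerOp:
    # Documented for ContainerOp: zero max staleness means cached results are never reused.
    task.execution_options.caching_strategy.max_cache_staleness = "P0D"
    return task

# Hypothetical ResourceOp equivalent (what this issue requests; not a current API):
# resource_task.execution_options.caching_strategy.max_cache_staleness = "P0D"
```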
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5055/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5053 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5053/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5053/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5053/events | https://github.com/kubeflow/pipelines/issues/5053 | 796,609,645 | MDU6SXNzdWU3OTY2MDk2NDU= | 5,053 | TypeError occurs in gcp/automl/create_dataset_for_tables component | {
"login": "yuku",
"id": 96157,
"node_id": "MDQ6VXNlcjk2MTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/96157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuku",
"html_url": "https://github.com/yuku",
"followers_url": "https://api.github.com/users/yuku/followers",
"following_url": "https://api.github.com/users/yuku/following{/other_user}",
"gists_url": "https://api.github.com/users/yuku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuku/subscriptions",
"organizations_url": "https://api.github.com/users/yuku/orgs",
"repos_url": "https://api.github.com/users/yuku/repos",
"events_url": "https://api.github.com/users/yuku/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuku/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1126834402,
"node_id": "MDU6TGFiZWwxMTI2ODM0NDAy",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components",
"name": "area/components",
"color": "d2b48c",
"default": false,
"description": ""
}
] | closed | false | {
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2021-01-29T06:09:04 | 2021-03-09T21:44:25 | 2021-01-29T07:33:02 | CONTRIBUTOR | null | ### What steps did you take:
[gcp/automl/create_dataset_for_tables component](https://github.com/kubeflow/pipelines/tree/master/components/gcp/automl/create_dataset_for_tables)'s `create_time` output is declared as a string:
https://github.com/kubeflow/pipelines/blob/ecb14f40bb819c0678589b6458892ece5369fa71/components/gcp/automl/create_dataset_for_tables/component.yaml#L15
however, a `google.protobuf.timestamp_pb2.Timestamp` is actually returned:
https://github.com/kubeflow/pipelines/blob/ecb14f40bb819c0678589b6458892ece5369fa71/components/gcp/automl/create_dataset_for_tables/component.py#L54
FYI: The `dataset` object is an instance of `google.cloud.automl_v1beta1.types.Dataset` class and its [document](https://googleapis.dev/python/automl/0.4.0/gapic/v1beta1/types.html#google.cloud.automl_v1beta1.types.Dataset.create_time) says:
> **create_time**
> Output only. Timestamp when this dataset was created.
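For reference, a minimal fix sketch — assuming the component converts the protobuf `Timestamp` to a string before returning it (`ToJsonString()` yields an RFC 3339 string):
```python
from google.protobuf import timestamp_pb2

def timestamp_to_str(ts: timestamp_pb2.Timestamp) -> str:
    # e.g. '2021-01-29T06:09:04Z', which matches the declared String output type.
    return ts.ToJsonString()
```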
### What happened:
`TypeError` occurs
![image](https://user-images.githubusercontent.com/96157/106237273-cf955a00-6241-11eb-91e2-2c53e4e82623.png)
### What did you expect to happen:
Work.
### Environment:
How did you deploy Kubeflow Pipelines (KFP)? AI Platform Pipelines
KFP version: 1.0.4
KFP SDK version: 1.3.0
### Anything else you would like to add:
/kind bug
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5053/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5051 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5051/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5051/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5051/events | https://github.com/kubeflow/pipelines/issues/5051 | 796,312,193 | MDU6SXNzdWU3OTYzMTIxOTM= | 5,051 | Unpin pip version in presubmit-tests-tfx.sh | {
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930619528,
"node_id": "MDU6TGFiZWw5MzA2MTk1Mjg=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/testing",
"name": "area/testing",
"color": "00daff",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | null | [] | null | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-01-28T19:59:22 | 2022-04-28T18:00:28 | 2022-04-28T18:00:28 | COLLABORATOR | null | Unpin pip version once we figure out how to make the new dependency resolver in pip 20.3+ work in our case.
Related issues:
#5049
#4853 | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5051/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5049 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5049/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5049/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5049/events | https://github.com/kubeflow/pipelines/issues/5049 | 796,256,892 | MDU6SXNzdWU3OTYyNTY4OTI= | 5,049 | Presubmit test kubeflow-pipelines-tfx-python36 failing | {
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"pip 21.0 came out on 1/23, and it still has the same dependency resolver issue as we hit with pip 20.3.*\r\n\r\nPrevious issue: #4853",
"This is due to a recent change from TFX side: https://github.com/tensorflow/tfx/issues/3157"
] | 2021-01-28T18:44:20 | 2021-01-29T23:16:02 | 2021-01-29T23:16:02 | COLLABORATOR | null | https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/5042/kubeflow-pipelines-tfx-python36/1354678589223604224
Times out while waiting for pip install to finish.
```
...
INFO: pip is looking at multiple versions of requests-oauthlib to determine which version is compatible with other requirements. This could take a while.
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking
INFO: pip is looking at multiple versions of markdown to determine which version is compatible with other requirements. This could take a while.
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking
INFO: pip is looking at multiple versions of google-auth-oauthlib to determine which version is compatible with other requirements. This could take a while.
Collecting google-auth-oauthlib<0.5,>=0.4.1
Downloading google_auth_oauthlib-0.4.1-py2.py3-none-any.whl (18 kB)
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-01-28T08:32:17Z"}
{"component":"entrypoint","file":"prow/entrypoint/run.go:250","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2021-01-28T08:32:33Z"}
``` | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5049/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5048 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5048/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5048/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5048/events | https://github.com/kubeflow/pipelines/issues/5048 | 796,074,939 | MDU6SXNzdWU3OTYwNzQ5Mzk= | 5,048 | How to make file_outputs dynamic in the KFP SDK? | {
"login": "PatrickGhosn",
"id": 33845447,
"node_id": "MDQ6VXNlcjMzODQ1NDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/33845447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PatrickGhosn",
"html_url": "https://github.com/PatrickGhosn",
"followers_url": "https://api.github.com/users/PatrickGhosn/followers",
"following_url": "https://api.github.com/users/PatrickGhosn/following{/other_user}",
"gists_url": "https://api.github.com/users/PatrickGhosn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PatrickGhosn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PatrickGhosn/subscriptions",
"organizations_url": "https://api.github.com/users/PatrickGhosn/orgs",
"repos_url": "https://api.github.com/users/PatrickGhosn/repos",
"events_url": "https://api.github.com/users/PatrickGhosn/events{/privacy}",
"received_events_url": "https://api.github.com/users/PatrickGhosn/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1682717392,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzky",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question",
"name": "kind/question",
"color": "2515fc",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"file_outputs is deprecated, please refer to `component` for the use case: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html",
"Thank you. Yes we ended up using outputs."
] | 2021-01-28T14:49:30 | 2021-03-07T08:24:01 | 2021-03-07T08:24:01 | NONE | null | /kind question
```
import json
import kfp
from kfp import components
from kfp import dsl
import os
import subprocess
from kfp.components import func_to_container_op, InputPath, OutputPath
def compute_test_features(
    input_test_path,
    output_path: OutputPath,
):
    return dsl.ContainerOp(
        name='Compute Mesh Features',
        image='gcr.io/brightclue-mlops/component-featurecomputation:v1',
        arguments=[
            '--input-test-path', input_test_path,
        ],
        file_outputs={'output': output_path},
    )
```
But when I try to compile the pipeline, I get the error below:
`TypeError: Object of type PipelineParam is not JSON serializable`
How can I have the file output be a parameter/dynamic?
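For reference, a minimal sketch of the function-based-component alternative the maintainers point to, where the system supplies the output path instead of `file_outputs`; the base image and function body here are placeholders:
```python
from kfp.components import create_component_from_func, OutputPath

def compute_test_features(input_test_path: str, output_path: OutputPath(str)):
    # KFP passes a concrete local path for output_path; whatever is written
    # there is exported as the component's 'output' artifact.
    with open(output_path, 'w') as f:
        f.write(input_test_path)

compute_test_features_op = create_component_from_func(
    compute_test_features, base_image='python:3.7')
```
| {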
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5048/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5046 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5046/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5046/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5046/events | https://github.com/kubeflow/pipelines/issues/5046 | 795,754,203 | MDU6SXNzdWU3OTU3NTQyMDM= | 5,046 | Kubeflow 1.2 PVC volume is not getting deleted | {
"login": "ajinkya933",
"id": 17012391,
"node_id": "MDQ6VXNlcjE3MDEyMzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/17012391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ajinkya933",
"html_url": "https://github.com/ajinkya933",
"followers_url": "https://api.github.com/users/ajinkya933/followers",
"following_url": "https://api.github.com/users/ajinkya933/following{/other_user}",
"gists_url": "https://api.github.com/users/ajinkya933/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ajinkya933/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ajinkya933/subscriptions",
"organizations_url": "https://api.github.com/users/ajinkya933/orgs",
"repos_url": "https://api.github.com/users/ajinkya933/repos",
"events_url": "https://api.github.com/users/ajinkya933/events{/privacy}",
"received_events_url": "https://api.github.com/users/ajinkya933/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930619528,
"node_id": "MDU6TGFiZWw5MzA2MTk1Mjg=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/testing",
"name": "area/testing",
"color": "00daff",
"default": false,
"description": ""
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"In my case, I manually delete volume using ResourceOp with delete action in ExitHandler\r\n\r\nhttps://github.com/kubeflow/pipelines/issues/1779#issuecomment-674085829",
"This should be unrelated to KFP. It sounds like an issue/configuration of the storage class not deleting dynamically provisioned volumes on PVC deletion.\r\n\r\nDo you confirm that the PVC is deleted?\r\n\r\nI would expect some configuration to allow this sequence of actions:\r\n1. User deletes PVC\r\n2. CSI deletes PV\r\n3. Volume is deleted from AWS\r\n\r\nI may be missing something though",
"Agree with elikatsis on this"
] | 2021-01-28T07:38:17 | 2021-03-05T00:45:21 | 2021-03-05T00:45:21 | NONE | null | ### What steps did you take:
### What happened:
I am able to train the MNIST dataset and make predictions on Kubeflow using the attached code:
[mnist.zip](https://github.com/kubeflow/pipelines/files/5885493/mnist.zip)
The execution of the attached code on Kubeflow is shown below.
![volume_delete](https://user-images.githubusercontent.com/17012391/106105082-6510eb00-6169-11eb-8894-cd9ed123d102.png)
However, even after deleting the volume, the actual volume on AWS EC2 is only getting detached; it's not getting deleted.
Detached volume on AWS EC2:
![Screenshot from 2021-01-28 13-06-14](https://user-images.githubusercontent.com/17012391/106105236-ad300d80-6169-11eb-959f-592cf3ceba1a.png)
### What did you expect to happen:
I expected the volume to get deleted.
How do I properly delete the volume?
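For context, the explicit-deletion pattern mentioned in the comments looks roughly like this — a sketch assuming the kfp v1 `VolumeOp` API; note that whether the EBS disk itself is then deleted depends on the StorageClass `reclaimPolicy` (`Delete` removes the disk, `Retain` keeps it):
```python
from kfp import dsl

@dsl.pipeline(name="volume-cleanup-demo")
def volume_cleanup_pipeline():
    vop = dsl.VolumeOp(name="create-volume", resource_name="demo-pvc", size="1Gi")
    step = dsl.ContainerOp(
        name="use-volume",
        image="busybox",
        command=["sh", "-c", "echo done > /mnt/out.txt"],
        pvolumes={"/mnt": vop.volume},
    )
    # Delete the PVC once the step finishes; EBS cleanup then follows the
    # StorageClass reclaimPolicy.
    vop.delete().after(step)
```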
### Environment:
kfp 1.3.0
kfp-pipeline-spec 0.1.4
kfp-server-api 1.2.0
/kind bug
/area sdk
/area testing
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5046/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5046/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5038 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5038/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5038/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5038/events | https://github.com/kubeflow/pipelines/issues/5038 | 794,335,060 | MDU6SXNzdWU3OTQzMzUwNjA= | 5,038 | ml_engine does not expose "labels" | {
"login": "dkajtoch",
"id": 32985207,
"node_id": "MDQ6VXNlcjMyOTg1MjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dkajtoch",
"html_url": "https://github.com/dkajtoch",
"followers_url": "https://api.github.com/users/dkajtoch/followers",
"following_url": "https://api.github.com/users/dkajtoch/following{/other_user}",
"gists_url": "https://api.github.com/users/dkajtoch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dkajtoch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkajtoch/subscriptions",
"organizations_url": "https://api.github.com/users/dkajtoch/orgs",
"repos_url": "https://api.github.com/users/dkajtoch/repos",
"events_url": "https://api.github.com/users/dkajtoch/events{/privacy}",
"received_events_url": "https://api.github.com/users/dkajtoch/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | null | [] | null | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-01-26T15:40:16 | 2022-04-28T18:00:22 | 2022-04-28T18:00:22 | CONTRIBUTOR | null | I am writing a Kubeflow pipeline with an AI Platform train job, but can't pass a "labels" argument (e.g. for tracking costs) to ML_ENGINE_TRAIN_OP, since there is no such input. I can add labels via gcloud or REST, and even Airflow has such an argument (https://github.com/apache/airflow/blob/7f4c88c0680b4fb98fe8b31800a93e1d0476c4db/airflow/providers/google/cloud/operators/mlengine.py#L1072)
```
ML_ENGINE_TRAIN_OP = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.3.0/components/gcp/ml_engine/train/component.yaml')
```
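For illustration, the kind of pass-through being requested — the `labels` argument below is hypothetical (the current component.yaml exposes no such input), and the other values are placeholders:
```python
train_task = ML_ENGINE_TRAIN_OP(
    project_id='my-project',          # placeholder
    python_module='trainer.task',     # placeholder
    labels={'team': 'ml', 'cost-center': 'training'},  # hypothetical new input
)
```
| {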
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5038/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5037 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5037/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5037/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5037/events | https://github.com/kubeflow/pipelines/issues/5037 | 794,250,072 | MDU6SXNzdWU3OTQyNTAwNzI= | 5,037 | Issues with deploy model | {
"login": "devilmetal",
"id": 3411049,
"node_id": "MDQ6VXNlcjM0MTEwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3411049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devilmetal",
"html_url": "https://github.com/devilmetal",
"followers_url": "https://api.github.com/users/devilmetal/followers",
"following_url": "https://api.github.com/users/devilmetal/following{/other_user}",
"gists_url": "https://api.github.com/users/devilmetal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/devilmetal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/devilmetal/subscriptions",
"organizations_url": "https://api.github.com/users/devilmetal/orgs",
"repos_url": "https://api.github.com/users/devilmetal/repos",
"events_url": "https://api.github.com/users/devilmetal/events{/privacy}",
"received_events_url": "https://api.github.com/users/devilmetal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930476737,
"node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted",
"name": "help wanted",
"color": "db1203",
"default": true,
"description": "The community is welcome to contribute."
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | null | [] | null | [
"Has a fix been implemented yet and is there a working version with the model object that we can use in the meantime?",
"@devilmetal Thank you for the detailed report!\r\nThis hasn't been implemented, PRs welcomed.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-01-26T13:55:04 | 2022-04-18T17:27:40 | 2022-04-18T17:27:40 | NONE | null | ### What steps did you take:
After trying to use the "model" parameter of the ml_engine deploy component, we are now facing an issue regarding deployment of models with the same name.
https://github.com/kubeflow/pipelines/tree/master/components/gcp/ml_engine/deploy
### What happened:
While using the 'replace_existing_version' parameter, the component raises a 409 error.
### What did you expect to happen:
While using the 'replace_existing_version' parameter, the model version should be replaced by the one provided, instead of failing with:
`returned "Field: model.name Error: A model with the same name already exists."`
### Environment:
Jupyter hub - Running on GCP
KubeFlow - Running on GCP
How did you deploy Kubeflow Pipelines (KFP)?
Using kubeflow components and python.
KFP version: 1.3.0
KFP SDK version: 1.3.0
/kind bug
Our main problem is regarding deployment. When using the "model" parameter with the following values, a 409 error is raised and the model is actually not deployed. Without the "model" parameter, the behaviour is correct, as expected.
```python
model = {
    'name': 'some_name',
    'regions': [
        "europe-west1"
    ],
    'onlinePredictionLogging': True,
    'onlinePredictionConsoleLogging': True,
    'labels': {
        'some_label': 'some_value'
    }
}
```
We spotted the faulty line here:
https://github.com/kubeflow/pipelines/blob/32ce8d8f90bfc8f89a2a3c347ad906f99ba776a8/components/gcp/container/component_sdk/python/kfp_component/google/ml_engine/_create_model.py#L69
It seems that the comparison between the existing model and the one passed as a parameter does not match.
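For illustration (the dictionaries below are hypothetical values, not the component's actual data), a strict equality comparison fails as soon as the API response carries server-populated fields, which would explain the 409:
```python
# Hypothetical illustration: the API returns extra server-populated fields,
# so a strict `==` comparison is False even for the "same" model, and the
# component then tries to create the model again and gets the 409.
existing_model = {
    'name': 'projects/my-project/models/some_name',  # fully qualified by the API
    'regions': ['europe-west1'],
    'etag': 'abc123',  # server-side field, never present in the request
}
requested_model = {
    'name': 'some_name',
    'regions': ['europe-west1'],
}

print(existing_model == requested_model)  # False -> create is attempted -> 409

# A subset comparison over only the user-supplied keys would match instead:
print(all(existing_model.get(key) == value
          for key, value in requested_model.items()
          if key != 'name'))  # True
```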
While looking at the code, we also spotted a few potential issues:
At this line, the model name seems to always be overwritten, even if you specify it in the model object given as a parameter.
https://github.com/kubeflow/pipelines/blob/32ce8d8f90bfc8f89a2a3c347ad906f99ba776a8/components/gcp/container/component_sdk/python/kfp_component/google/ml_engine/_create_model.py#L83
At this line, you might have omitted the field 'onlinePredictionConsoleLogging' and other new fields. https://github.com/kubeflow/pipelines/blob/32ce8d8f90bfc8f89a2a3c347ad906f99ba776a8/components/gcp/container/component_sdk/python/kfp_component/google/ml_engine/_create_model.py#L92
Here is the stacktrace:
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/ml/kfp_component/launcher/__main__.py", line 45, in <module>
    main()
  File "/ml/kfp_component/launcher/__main__.py", line 42, in main
    launch(args.file_or_module, args.args)
  File "/ml/kfp_component/launcher/launcher.py", line 45, in launch
    return fire.Fire(module, command=args, name=module.__name__)
  File "/usr/local/lib/python3.7/site-packages/fire/core.py", line 127, in Fire
    component_trace = _Fire(component, args, context, name)
  File "/usr/local/lib/python3.7/site-packages/fire/core.py", line 366, in _Fire
    component, remaining_args)
  File "/usr/local/lib/python3.7/site-packages/fire/core.py", line 542, in _CallCallable
    result = fn(*varargs, **kwargs)
  File "/ml/kfp_component/google/ml_engine/_deploy.py", line 66, in deploy
    model_name_output_path=model_name_output_path,
  File "/ml/kfp_component/google/ml_engine/_create_model.py", line 39, in create_model
    model_object_output_path=model_object_output_path,
  File "/ml/kfp_component/google/ml_engine/_create_model.py", line 65, in execute
    model = self._model)
  File "/ml/kfp_component/google/ml_engine/_client.py", line 82, in create_model
    body = model
  File "/usr/local/lib/python3.7/site-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/googleapiclient/http.py", line 851, in execute
    raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 409 when requesting
https://ml.googleapis.com/v1/projects/xxxxxxxxx/models?alt=json
returned "Field: model.name Error: A model with the same name already exists.">
``` | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5037/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5032 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5032/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5032/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5032/events | https://github.com/kubeflow/pipelines/issues/5032 | 793,676,497 | MDU6SXNzdWU3OTM2NzY0OTc= | 5,032 | Kubeflow 1.0.2 - pipelines upgrade - failing ... | {
"login": "pkasson-ascena",
"id": 77991655,
"node_id": "MDQ6VXNlcjc3OTkxNjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/77991655?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pkasson-ascena",
"html_url": "https://github.com/pkasson-ascena",
"followers_url": "https://api.github.com/users/pkasson-ascena/followers",
"following_url": "https://api.github.com/users/pkasson-ascena/following{/other_user}",
"gists_url": "https://api.github.com/users/pkasson-ascena/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pkasson-ascena/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pkasson-ascena/subscriptions",
"organizations_url": "https://api.github.com/users/pkasson-ascena/orgs",
"repos_url": "https://api.github.com/users/pkasson-ascena/repos",
"events_url": "https://api.github.com/users/pkasson-ascena/events{/privacy}",
"received_events_url": "https://api.github.com/users/pkasson-ascena/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The upgrade command is only supported by Kubeflow Pipelines standalone. You cannot use it to upgrade Kubeflow.",
"https://www.kubeflow.org/docs/pipelines/installation/overview/",
"We were trying to upgrade only pipelines - possibly the pasted results from above were from the other approach, that being a frest install. Does the upgrade to pipelines work, if you are on KF 1.0.2 and whatever the version of pipelines is for that ?"
] | 2021-01-25T19:49:00 | 2021-01-26T13:46:52 | 2021-01-26T07:41:36 | NONE | null | Referred to the upgrade website and ran the command:
```
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/env/dev?ref=$PIPELINE_VERSION"
```
with the pipeline version being 1.3.
Here is the output:
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"application-crd-id\":\"kubeflow-pipelines\",\"component\":\"metadata-envoy\"},\"name\":\"metadata-envoy-deployment\",\"namespace\":\"kubeflow\"},\"spec\":{\"replicas\":1,\"selector\":{\"matchLabels\":{\"application-crd-id\":\"kubeflow-pipelines\",\"component\":\"metadata-envoy\"}},\"template\":{\"metadata\":{\"labels\":{\"application-crd-id\":\"kubeflow-pipelines\",\"component\":\"metadata-envoy\"}},\"spec\":{\"containers\":[{\"image\":\"gcr.io/ml-pipeline/metadata-envoy:1.3.0\",\"name\":\"container\",\"ports\":[{\"containerPort\":9090,\"name\":\"md-envoy\"},{\"containerPort\":9901,\"name\":\"envoy-admin\"}]}]}}}}\n"},"labels":{"app.kubernetes.io/component":null,"app.kubernetes.io/instance":null,"app.kubernetes.io/managed-by":null,"app.kubernetes.io/name":null,"app.kubernetes.io/part-of":null,"app.kubernetes.io/version":null,"application-crd-id":"kubeflow-pipelines","component":"metadata-envoy","kustomize.component":null}},"spec":{"selector":{"matchLabels":{"app.kubernetes.io/component":null,"app.kubernetes.io/instance":null,"app.kubernetes.io/managed-by":null,"app.kubernetes.io/name":null,"app.kubernetes.io/part-of":null,"app.kubernetes.io/version":null,"application-crd-id":"kubeflow-pipelines","component":"metadata-envoy","kustomize.component":null}},"template":{"metadata":{"labels":{"app.kubernetes.io/component":null,"app.kubernetes.io/instance":null,"app.kubernetes.io/managed-by":null,"app.kubernetes.io/name":null,"app.kubernetes.io/part-of":null,"app.kubernetes.io/version":null,"application-crd-id":"kubeflow-pipelines","component":"metadata-envoy","kustomize.component":null}},"spec":{"$setElementOrder/containers":[{"name":"container"}],"containers":[{"image":"gcr.io/ml-pipeline/metadata-envoy:1.3.0","name":"container"}]}}}}
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5032/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5031 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5031/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5031/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5031/events | https://github.com/kubeflow/pipelines/issues/5031 | 793,256,323 | MDU6SXNzdWU3OTMyNTYzMjM= | 5,031 | `ml-pipeline-ui` cannot be configured for volume-support | {
"login": "Mar-ai",
"id": 45969469,
"node_id": "MDQ6VXNlcjQ1OTY5NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/45969469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mar-ai",
"html_url": "https://github.com/Mar-ai",
"followers_url": "https://api.github.com/users/Mar-ai/followers",
"following_url": "https://api.github.com/users/Mar-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/Mar-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mar-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mar-ai/subscriptions",
"organizations_url": "https://api.github.com/users/Mar-ai/orgs",
"repos_url": "https://api.github.com/users/Mar-ai/repos",
"events_url": "https://api.github.com/users/Mar-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mar-ai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1682717392,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzky",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question",
"name": "kind/question",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | {
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Note, in multi user mode, there's an kfp artifact server in each user's namespace. You should be able to mount the volume there.",
"but in general, this feature in in Alpha stage, it might not work out. I'd love to hear your feedback if it could be configured to work for you, or do you have any suggestions to improve it?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-01-25T10:30:45 | 2022-04-28T18:00:27 | 2022-04-28T18:00:27 | NONE | null | ### What steps did you take:
I'm starting a TFX-pipeline in a jupyter notebook in namespace `admin`. According to the [TFX kubeflow-on premise example](https://github.com/tensorflow/tfx/blob/master/tfx/examples/chicago_taxi_pipeline/taxi_pipeline_kubeflow_local.py), the PV is mounted as follows:
```python
_persistent_volume_claim = 'pvc-tfx'
_persistent_volume = 'pv-tfx'
_persistent_volume_mount = '/mnt'
```
and used in
```python
...
kfp.onprem.mount_pvc(_persistent_volume_claim, _persistent_volume,
_persistent_volume_mount)
...
```
PV and PVC have been created in namespace `admin`.
The pipeline succeeds. Now I want to display the generated artifacts in the kubeflow UI. I'm following the instructions here to patch the `ml-pipeline-ui` in order to generate visualizations from volumes:
https://github.com/kubeflow/pipelines/blob/master/docs/config/volume-support.md
### What happened:
Since the PVC is in namespace `admin` and the `ml-pipeline-ui` is in namespace `kubeflow`, the PVC cannot be found and
the patch is unsuccessful:
```bash
$ kubectl rollout status deploy/ml-pipeline-ui -n kubeflow
Waiting for deployment "ml-pipeline-ui" rollout to finish: 1 old replicas are pending termination...
# analysis
$ kubectl describe pod ml-pipeline-ui-76fcd74b66-j47pp -n kubeflow
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 45s (x7 over 9m42s) default-scheduler persistentvolumeclaim "pvc-tfx," not found
```
How could this be solved?
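One possible direction, sketched below with the Kubernetes Python client (not a confirmed fix): in multi-user mode each profile namespace runs its own artifact server, as noted in the comments above, so the PVC could be mounted on that deployment, which lives in the same `admin` namespace as the PVC. The deployment and container name `ml-pipeline-ui-artifact` is an assumption and may differ per install.
```python
# Sketch only: patch the per-namespace artifact server so it can read the
# PVC that lives in the same (admin) namespace. Deployment/container names
# ("ml-pipeline-ui-artifact") are assumptions and may need adjusting.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

patch = {
    "spec": {"template": {"spec": {
        "volumes": [{
            "name": "pv-tfx",
            "persistentVolumeClaim": {"claimName": "pvc-tfx"},
        }],
        "containers": [{
            "name": "ml-pipeline-ui-artifact",
            "volumeMounts": [{"name": "pv-tfx", "mountPath": "/mnt"}],
        }],
    }}}
}
apps.patch_namespaced_deployment(
    name="ml-pipeline-ui-artifact", namespace="admin", body=patch)
```
After the patch rolls out, the volume-support instructions would apply to this per-namespace server rather than the shared `ml-pipeline-ui` in `kubeflow`.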
### Environment:
<!-- Please fill in those that seem relevant. -->
How did you deploy Kubeflow Pipelines (KFP)?
Full Kubeflow 1.2 deployment on an on-premise K8s cluster
KFP version: build version v1beta1
KFP SDK version:
```python
kfp 1.3.0
kfp-pipeline-spec 0.1.3.1
kfp-server-api 1.3.0
```
### Anything else you would like to add:
/kind bug
/area frontend
<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->
<!--
/area frontend
// /area backend
// /area sdk
// /area testing
// /area engprod
-->
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5031/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5029 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5029/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5029/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5029/events | https://github.com/kubeflow/pipelines/issues/5029 | 793,062,410 | MDU6SXNzdWU3OTMwNjI0MTA= | 5,029 | Can't access the endpoint generated at the end of the process | {
"login": "Juggernaut1997",
"id": 38181043,
"node_id": "MDQ6VXNlcjM4MTgxMDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/38181043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Juggernaut1997",
"html_url": "https://github.com/Juggernaut1997",
"followers_url": "https://api.github.com/users/Juggernaut1997/followers",
"following_url": "https://api.github.com/users/Juggernaut1997/following{/other_user}",
"gists_url": "https://api.github.com/users/Juggernaut1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Juggernaut1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Juggernaut1997/subscriptions",
"organizations_url": "https://api.github.com/users/Juggernaut1997/orgs",
"repos_url": "https://api.github.com/users/Juggernaut1997/repos",
"events_url": "https://api.github.com/users/Juggernaut1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/Juggernaut1997/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1682717392,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzky",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question",
"name": "kind/question",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | null | [] | null | [
"@Juggernaut1997 Do you use full Kubeflow or AI platform pipelines?\r\n\r\nYou may take a look at KFServing, I think they provide mechanism to expose generic container endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-01-25T06:08:00 | 2022-04-28T18:00:23 | 2022-04-28T18:00:23 | NONE | null | Hi team,
I am an ML Engineer, currently working on MLOps on GCP. I developed my ML model and used `kfp` functions to deploy it to Kubeflow, which I did successfully. In addition, I have developed a simple UI with Flask to access my ML model.
So for now, my application consists of 3 steps:
1. Pre-process
2. Train
3. Predict
At the end of predict, an endpoint is generated as a localhost link, but since it is generated inside Kubeflow, I went to Workloads on GCP and tried to expose the pod through a LoadBalancer port. That is not working: the pod gets exposed on the port, but I am not able to access the endpoint.
Can you help me with how to access the endpoint?
Thank you so much in advance | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5029/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5026 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5026/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5026/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5026/events | https://github.com/kubeflow/pipelines/issues/5026 | 792,045,124 | MDU6SXNzdWU3OTIwNDUxMjQ= | 5,026 | Nvidia A100: GPU Slice definition not supported via Pipeline DSL | {
"login": "kd303",
"id": 16409185,
"node_id": "MDQ6VXNlcjE2NDA5MTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/16409185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kd303",
"html_url": "https://github.com/kd303",
"followers_url": "https://api.github.com/users/kd303/followers",
"following_url": "https://api.github.com/users/kd303/following{/other_user}",
"gists_url": "https://api.github.com/users/kd303/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kd303/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kd303/subscriptions",
"organizations_url": "https://api.github.com/users/kd303/orgs",
"repos_url": "https://api.github.com/users/kd303/repos",
"events_url": "https://api.github.com/users/kd303/events{/privacy}",
"received_events_url": "https://api.github.com/users/kd303/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | open | false | null | [] | null | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I'm having this problem too - we've got a 4 A100 cluster with heterogeneous GPU sizes. I might use the same workaround, but I'll let you know if I come up with something different.",
"I think the problem is mainly to do with KFP SDK, where the method ``set_gpu_limit()`` has resource type hard-coded inside, so unless that particular issue is not fixed it will not not work,\r\n\r\nBasically line number 421 in [sdk ``set_gpu_limit()``](https://github.com/kubeflow/pipelines/blob/dec03067ca1f89f1ca23c7397830d60201448fa6/sdk/python/kfp/dsl/_container_op.py#L394) needs to change to include regular expression and not check exact match.. may be that could be solution. ",
"Yeah, funny thing is if you change that it still fails because of this line: https://github.com/kubeflow/pipelines/blob/83ecb97fee69932db3250a788d38ad515289840d/sdk/python/kfp/compiler/_op_to_template.py#L318 which has a similar problem.\r\n\r\nThanks for pointing that out though, I’ll add that change to my PR later today.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 2021-01-22T14:24:16 | 2022-03-02T21:05:11 | null | NONE | null | ### What steps did you take:
I created a simple test pipeline to check the NVIDIA slicing definitions, using the following code:
```
train = dsl.ContainerOp(
    name='Train Model',
    image='tensorflow:v1',
).add_node_selector_constraint("nvidia.com/gpu.product", "A100-SXM4-40GB MIG 1g.5gb")
train.set_gpu_limit(1)
```
We have a 3-node A100 setup on which Kubeflow 1.1 is deployed, and we are following the [mixed strategy](https://developer.nvidia.com/blog/getting-kubernetes-ready-for-the-a100-gpu-with-multi-instance-gpu/); basically, MIG mode is enabled.
### What happened:
After compiling and deploying the YAML definition, the pipeline run shows an error; please see the attached screenshot.
![kubeflowerrormig](https://user-images.githubusercontent.com/16409185/105500237-a11afa80-5ce8-11eb-8105-4e50c9f669b8.JPG)
Changing the DSL definition to:
```
.add_node_selector_constraint("nvidia.com/gpu.product", "A100-SXM4-40GB MIG 1g.5gb")
trainOp.set_gpu_limit(1)
```
This creates the following error, where the node cannot be scheduled (this may work if we don't create the MIG definition):
![a100error2](https://user-images.githubusercontent.com/16409185/105501886-c9a3f400-5cea-11eb-8e93-bf3e1f3547ff.JPG)
Then, changing to a third option in the DSL, a compilation error is thrown, which basically complains that the vendor name must be nvidia:
```
.set_gpu_limit(1, "nvidia.com/mig-3g.20gb")
train.add_node_selector_constraint("nvidia.com/gpu.product","A100-SXM4-40GB")
File "D:\Apps\Anaconda3\lib\site-packages\kfp\dsl\_container_op.py", line 303, in set_gpu_limit
raise ValueError('vendor can only be nvidia or amd.')
ValueError: vendor can only be nvidia or amd.
```
When I changed the compiled YAML file as follows, I could run the pipeline successfully.
```
resources:
  limits:
    nvidia.com/mig-3g.20gb: 1
```
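For completeness, a small post-compile sketch of that same hand edit (assuming the compiled workflow is saved as `pipeline.yaml`; the file name and the `nvidia.com/gpu` key to swap are assumptions):
```python
# Sketch: rewrite the GPU resource limit in the compiled Argo workflow so
# the MIG resource name is used, mirroring the manual YAML edit above.
import yaml

with open("pipeline.yaml") as f:  # file name is an assumption
    workflow = yaml.safe_load(f)

for template in workflow["spec"]["templates"]:
    limits = template.get("container", {}).get("resources", {}).get("limits", {})
    if "nvidia.com/gpu" in limits:
        limits["nvidia.com/mig-3g.20gb"] = limits.pop("nvidia.com/gpu")

with open("pipeline-mig.yaml", "w") as f:
    yaml.safe_dump(workflow, f)
```
This avoids editing the YAML by hand after every compile until the SDK supports custom GPU resource names.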
### What did you expect to happen:
### Environment:
<!-- Please fill in those that seem relevant. -->
How did you deploy Kubeflow Pipelines (KFP)?
<!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). -->
KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> 1.0
KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp --> 1.0.4
### Anything else you would like to add:
/kind bug
<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->
<!--
// /area frontend
// /area backend
// /area sdk
// /area testing
// /area engprod
-->
/area sdk
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5026/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5022 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5022/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5022/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5022/events | https://github.com/kubeflow/pipelines/issues/5022 | 791,328,310 | MDU6SXNzdWU3OTEzMjgzMTA= | 5,022 | metadata-ui pod in CrashLoopBackOff status after making it to run as non root group | {
"login": "shruthingcp",
"id": 76464452,
"node_id": "MDQ6VXNlcjc2NDY0NDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/76464452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shruthingcp",
"html_url": "https://github.com/shruthingcp",
"followers_url": "https://api.github.com/users/shruthingcp/followers",
"following_url": "https://api.github.com/users/shruthingcp/following{/other_user}",
"gists_url": "https://api.github.com/users/shruthingcp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shruthingcp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shruthingcp/subscriptions",
"organizations_url": "https://api.github.com/users/shruthingcp/orgs",
"repos_url": "https://api.github.com/users/shruthingcp/repos",
"events_url": "https://api.github.com/users/shruthingcp/events{/privacy}",
"received_events_url": "https://api.github.com/users/shruthingcp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | null | [] | null | [
"Metadata UI has been deprecated, the new metadata UI will be improved in KFP UI.\r\nDo you have the same problem with KFP UI?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-01-21T16:55:01 | 2022-04-28T18:00:25 | 2022-04-28T18:00:25 | NONE | null | ### What steps did you take:
Added a securityContext to metadata-ui to make it run as a non-root user and non-root group, and deployed metadata-ui using the modified spec. The metadata-ui pod is stuck in CrashLoopBackOff status and the error below is shown in the logs:
```
internal/modules/cjs/loader.js:626
    throw err;
    ^
Error: Cannot find module 'express'
Require stack:
- /server/dist/server.js
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:623:15)
    at Function.Module._load (internal/modules/cjs/loader.js:527:27)
    at Module.require (internal/modules/cjs/loader.js:681:19)
    at require (internal/modules/cjs/helpers.js:16:16)
    at Object.<anonymous> (/server/dist/server.js:52:15)
    at Module._compile (internal/modules/cjs/loader.js:774:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:785:10)
    at Module.load (internal/modules/cjs/loader.js:641:32)
    at Function.Module._load (internal/modules/cjs/loader.js:556:12)
    at Function.Module.runMain (internal/modules/cjs/loader.js:837:10) {
  code: 'MODULE_NOT_FOUND',
  requireStack: [ '/server/dist/server.js' ]
}
stream closed
```
### What happened:
metadata-ui pod in CrashLoopBackOff status
### What did you expect to happen:
metadata-ui should be running as a non-root user and non-root group
### Environment:
- Kubeflow version: v1.1.0
- kfctl version: v1.0-0-g94c35cf
- Kubernetes platform: minikube
- Kubernetes version: 1.15.9
### Anything else you would like to add:
Removed runAsGroup from the securityContext and the metadata-ui pod runs as expected. We are unable to understand why runAsGroup causes the express node module to go missing. One possible explanation (speculation, not verified): if the node_modules files in the image are only readable by the image's original group, overriding the primary group with runAsGroup would make them unreadable, so `require('express')` fails.
/kind bug
<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->
<!--
// /area frontend
// /area backend
// /area sdk
// /area testing
// /area engprod
-->
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5022/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5021 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5021/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5021/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5021/events | https://github.com/kubeflow/pipelines/issues/5021 | 790,799,922 | MDU6SXNzdWU3OTA3OTk5MjI= | 5,021 | 2021-1-21: presubmit failure "Insufficient regional quota to satisfy request: resource "DISKS_TOTAL_GB" ..." | {
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | closed | false | {
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"/cc @hilcj \r\n/assign @Bobgy \r\nI'll try to fix this",
"It's duplicate of #2523, which we should really prioritize",
"I'm going to first run the workaround script to clean up unused disks quickly.",
"Finished clean up"
] | 2021-01-21T08:02:11 | 2021-01-26T07:42:04 | 2021-01-26T07:42:04 | CONTRIBUTOR | null | ### What steps did you take:
### What happened:
The test infra ran out of quota: https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/5019/kubeflow-pipeline-sample-v2/1352162018584432640#1:build-log.txt%3A285
### What did you expect to happen:
### Environment:
<!-- Please fill in those that seem relevant. -->
How did you deploy Kubeflow Pipelines (KFP)?
<!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). -->
KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. -->
KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp -->
### Anything else you would like to add:
/kind bug
<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->
<!--
// /area frontend
// /area backend
// /area sdk
// /area testing
// /area engprod
-->
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5021/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5020 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5020/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5020/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5020/events | https://github.com/kubeflow/pipelines/issues/5020 | 790,736,869 | MDU6SXNzdWU3OTA3MzY4Njk= | 5,020 | NOTICE: "Context retired without replacement" during migration to google-oss-robot | {
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting! Hopefully we can migrate to google-oss-robot soon",
"Thanks, in fact google-oss-robot is still spamming, so we need to keep it for some time",
"We have migrated to @google-oss-robot, there was an issue during the migration that caused all presubmit tests in existing PRs to succeed with message \"Context retired without replacement\". Please comment `/retest` on the PR to retry them.",
"the migration has succeeded, closing this issue"
] | 2021-01-21T06:21:15 | 2021-01-28T03:07:48 | 2021-01-26T07:45:01 | CONTRIBUTOR | null | ## UPDATE
We have migrated to @google-oss-robot. There was an issue during the migration that caused all presubmit tests in existing PRs to succeed with the message "Context retired without replacement".
## Workaround
Please comment `/test all` on the PR to retry them.
## Original issues
For aws-kf-ci-bot: upstream issue - ~https://github.com/kubeflow/internal-acls/issues/354#issuecomment-764311315~ has already been resolved.
For google-oss-robot:
We are migrating to google-oss-prow; please ignore its statuses during the transition period. We'll disable k8s-ci-robot very soon.
ETA: 1 week.
Progress can be tracked at https://github.com/kubernetes/test-infra/issues/14343 | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5020/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5014 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5014/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5014/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5014/events | https://github.com/kubeflow/pipelines/issues/5014 | 789,897,955 | MDU6SXNzdWU3ODk4OTc5NTU= | 5,014 | Wrong cron format example | {
"login": "kim-sardine",
"id": 8458055,
"node_id": "MDQ6VXNlcjg0NTgwNTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8458055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kim-sardine",
"html_url": "https://github.com/kim-sardine",
"followers_url": "https://api.github.com/users/kim-sardine/followers",
"following_url": "https://api.github.com/users/kim-sardine/following{/other_user}",
"gists_url": "https://api.github.com/users/kim-sardine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kim-sardine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kim-sardine/subscriptions",
"organizations_url": "https://api.github.com/users/kim-sardine/orgs",
"repos_url": "https://api.github.com/users/kim-sardine/repos",
"events_url": "https://api.github.com/users/kim-sardine/events{/privacy}",
"received_events_url": "https://api.github.com/users/kim-sardine/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"This is resolved by: https://github.com/kubeflow/pipelines/pull/5028 @hilcj ",
"from `godoc.org` to `pkg.go.dev`! Thanks a lot"
] | 2021-01-20T11:33:55 | 2021-01-25T01:45:59 | 2021-01-25T01:45:58 | CONTRIBUTOR | null | ### What steps did you take:
Creating a recurring run on the Kubeflow central dashboard.
### What happened:
There is a link to the cron format guide - `Allow editing cron expression. ( format is specified here)` - on the page for creating a recurring run, and this link takes me to [godoc-cron](https://godoc.org/github.com/robfig/cron#hdr-CRON_Expression_Format),
and it says:
Field name | Mandatory? | Allowed values | Allowed special characters
---------- | ---------- | -------------- | --------------------------
Minutes | Yes | 0-59 | * / , -
Hours | Yes | 0-23 | * / , -
Day of month | Yes | 1-31 | * / , - ?
Month | Yes | 1-12 or JAN-DEC | * / , -
Day of week | Yes | 0-6 or SUN-SAT | * / , - ?
There is no `Seconds` part,
but the recurring run's cron needs `Seconds` in the first position.
So what I expected: `0 * * * *` -> runs every hour (from what [godoc-cron](https://godoc.org/github.com/robfig/cron#hdr-CRON_Expression_Format) says),
but what happened: it runs every minute.
So I think this link needs to be updated.
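For reference (our understanding of the backend's parser, worth double-checking): the recurring-run cron uses a six-field format with `Seconds` in the first position, so `0 * * * *` ends up firing at second 0 of every minute, while the hourly schedule under this format would be the six-field expression `0 0 * * * *`.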
### Environment:
<!-- Please fill in those that seem relevant. -->
KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. -->
1.0.4
KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp -->
1.2.0
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5014/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5014/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5008 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5008/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5008/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5008/events | https://github.com/kubeflow/pipelines/issues/5008 | 789,221,698 | MDU6SXNzdWU3ODkyMjE2OTg= | 5,008 | Pipeline component fails with [object Object] as pod log | {
"login": "ShilpaGopal",
"id": 13718648,
"node_id": "MDQ6VXNlcjEzNzE4NjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/13718648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShilpaGopal",
"html_url": "https://github.com/ShilpaGopal",
"followers_url": "https://api.github.com/users/ShilpaGopal/followers",
"following_url": "https://api.github.com/users/ShilpaGopal/following{/other_user}",
"gists_url": "https://api.github.com/users/ShilpaGopal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShilpaGopal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShilpaGopal/subscriptions",
"organizations_url": "https://api.github.com/users/ShilpaGopal/orgs",
"repos_url": "https://api.github.com/users/ShilpaGopal/repos",
"events_url": "https://api.github.com/users/ShilpaGopal/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShilpaGopal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1682717392,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzky",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question",
"name": "kind/question",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | null | [] | null | [
"Seems like something UI related. Does the container produce logs? Is it just this pipeline/pipeline step that does this?",
"Yes @parthmishra container produces logs using python logger. The previous step is able to log currently but the next step fails. \r\n\r\nI tried the sample flip coin example(https://github.com/kubeflow/pipelines/blob/master/samples/core/condition/condition.py) which has multiple steps, it was able to log the output.\r\n\r\nI am not sure, I might be missing something! If its a UI related issue then it must affect all the pipelines with multiple steps right? ",
"I was able to solve this problem. Here are my findings. \r\n\r\nimport-utility.Dockerfile\r\n```\r\nFROM import-utility:1.0.0-23\r\nRUN mkdir import-utility\r\nCOPY /src /import-utility/src\r\n\r\nWORKDIR /import-utility\r\n\r\nENTRYPOINT [\"python\"]\r\n```\r\nMy pipeline step's image containers have WORKDIR set to /import-utility, I was writing the output file from step 1 in the path file_outputs={'output': '/import-utility/output.txt'}. \r\n\r\nBy changing it to write to the path file_outputs={'output': '/tmp/output.txt'}, step 2 was able to fetch the output successfully. \r\n\r\nI am not sure about the effect of writing the output file inside the container's working dir v/s outside of it, on <your_previous_dsl.ContainerOp>.output! any input on this is appreciated. ",
"/cc @chensun @Ark-kun \r\nDo you have an suggestions for @shilpagopal?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-01-19T17:37:40 | 2022-04-28T18:00:26 | 2022-04-28T18:00:26 | NONE | null | ### What steps did you take:
Created a pipeline with two stages; on success of the 1st stage, a status is published by the 2nd stage.
![image](https://user-images.githubusercontent.com/13718648/105070966-ffa66580-5aa9-11eb-863b-d6ea791e2c55.png)
### What happened:
The 2nd stage always fails with exit code 255 and logs `[object Object]`.
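For reference, a minimal sketch of the layout that worked according to the resolution in the comments above (the image name and commands are placeholders taken from this report, not the actual values): write each step's `file_outputs` path outside the image's `WORKDIR`, e.g. under `/tmp`, so the downstream step can consume `<previous_op>.output`.
```python
# Sketch of a two-step pipeline where step 1 writes its output under /tmp
# (outside the image's WORKDIR), so step 2 can consume it. Image name and
# commands are placeholders, not the reporter's actual values.
import kfp.dsl as dsl

@dsl.pipeline(name="two-stage-example")
def two_stage():
    step1 = dsl.ContainerOp(
        name="import",
        image="import-utility:1.0.0-23",
        command=["python", "src/main.py"],
        file_outputs={"output": "/tmp/output.txt"},  # not /import-utility/output.txt
    )
    dsl.ContainerOp(
        name="publish-status",
        image="import-utility:1.0.0-23",
        command=["python", "src/publish.py"],
        arguments=[step1.outputs["output"]],
    )
```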
### Environment:
<!-- Please fill in those that seem relevant. -->
How did you deploy Kubeflow Pipelines (KFP)?
<!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). -->
Custom build is deployed on on-prem cluster with Istio and Keycloak integration
KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> 1.2.0
KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp -->1.1.1
### Anything else you would like to add:
/kind bug
<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->
<!--
// /area frontend
// /area backend
// /area sdk
// /area testing
// /area engprod
-->
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5008/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5007 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5007/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5007/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5007/events | https://github.com/kubeflow/pipelines/issues/5007 | 789,159,609 | MDU6SXNzdWU3ODkxNTk2MDk= | 5,007 | Kubeflow-pipeline-postsubmit-integration-test failure | {
"login": "hilcj",
"id": 17188784,
"node_id": "MDQ6VXNlcjE3MTg4Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/17188784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hilcj",
"html_url": "https://github.com/hilcj",
"followers_url": "https://api.github.com/users/hilcj/followers",
"following_url": "https://api.github.com/users/hilcj/following{/other_user}",
"gists_url": "https://api.github.com/users/hilcj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hilcj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hilcj/subscriptions",
"organizations_url": "https://api.github.com/users/hilcj/orgs",
"repos_url": "https://api.github.com/users/hilcj/repos",
"events_url": "https://api.github.com/users/hilcj/events{/privacy}",
"received_events_url": "https://api.github.com/users/hilcj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930619511,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTE=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/priority/p0",
"name": "priority/p0",
"color": "db1203",
"default": false,
"description": ""
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] | closed | false | {
"login": "hongye-sun",
"id": 43763191,
"node_id": "MDQ6VXNlcjQzNzYzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/43763191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hongye-sun",
"html_url": "https://github.com/hongye-sun",
"followers_url": "https://api.github.com/users/hongye-sun/followers",
"following_url": "https://api.github.com/users/hongye-sun/following{/other_user}",
"gists_url": "https://api.github.com/users/hongye-sun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hongye-sun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hongye-sun/subscriptions",
"organizations_url": "https://api.github.com/users/hongye-sun/orgs",
"repos_url": "https://api.github.com/users/hongye-sun/repos",
"events_url": "https://api.github.com/users/hongye-sun/events{/privacy}",
"received_events_url": "https://api.github.com/users/hongye-sun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "hongye-sun",
"id": 43763191,
"node_id": "MDQ6VXNlcjQzNzYzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/43763191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hongye-sun",
"html_url": "https://github.com/hongye-sun",
"followers_url": "https://api.github.com/users/hongye-sun/followers",
"following_url": "https://api.github.com/users/hongye-sun/following{/other_user}",
"gists_url": "https://api.github.com/users/hongye-sun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hongye-sun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hongye-sun/subscriptions",
"organizations_url": "https://api.github.com/users/hongye-sun/orgs",
"repos_url": "https://api.github.com/users/hongye-sun/repos",
"events_url": "https://api.github.com/users/hongye-sun/events{/privacy}",
"received_events_url": "https://api.github.com/users/hongye-sun/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@Bobgy ",
"@hilcj do you need help from other members?\nIt's oncall's responsibility to do the initial investigations.\n\nBut feel free to delegate to us if you think it's out of your knowledge range",
"@Bobgy actually I just want to ask you if this is an known issue. Because the failure has started at least two weeks ago and previous oncalls may have already reported it.\r\n\r\nIf not, I'll do the investigation and get back to you.\r\n\r\nBtw do you know where we keep track of the live issues? Seems the oncalls handover note was not updated since Dec 18, and no update on the kfp oncalls book - live issues since my last oncall in Nov.",
"It should be the handover notes, but I guess @Ark-kun and @IronPan didn't take them.\n\nDid you see this problem before?",
"Error is from dataflow sample test, and this is related to a recent fix I made for dataflow component. Will send a fix shortly.",
"Postsubmit is still red with multiple errors. Reopen this, and I'll investigate one by one shortly.",
"There're other build failures similar to the one I fixed above. Reopen and I'll make fixes shortly.",
"Awesome, thank you @chensun!",
"JFYI:\r\nThe latest issue with the deprecated dataflow component container build was caused by pip 21.0 dropping support for python2. https://github.com/pypa/pip/issues/6148 Those container images were dynamically installing latest version of pip which cause the build to start failing. ",
"https://oss-prow.knative.dev/view/gs/oss-prow/logs/kubeflow-pipeline-postsubmit-standalone-component-test/1354926169316659200\r\n\r\nLatest test error was:\r\n```\r\nAdding pip 21.0 to easy-install.pth file\r\nInstalling pip script to /usr/local/bin\r\nInstalling pip2.7 script to /usr/local/bin\r\nInstalling pip2 script to /usr/local/bin\r\n\r\nInstalled /usr/local/lib/python2.7/dist-packages/pip-21.0-py2.7.egg\r\nProcessing dependencies for pip\r\nFinished processing dependencies for pip\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/pip\", line 11, in <module>\r\n load_entry_point('pip==21.0', 'console_scripts', 'pip')()\r\n File \"/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py\", line 561, in load_entry_point\r\n return get_distribution(dist).load_entry_point(group, name)\r\n File \"/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py\", line 2631, in load_entry_point\r\n return ep.load()\r\n File \"/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py\", line 2291, in load\r\n return self.resolve()\r\n File \"/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py\", line 2297, in resolve\r\n module = __import__(self.module_name, fromlist=['__name__'], level=0)\r\n File \"/usr/local/lib/python2.7/dist-packages/pip-21.0-py2.7.egg/pip/_internal/cli/main.py\", line 60\r\n sys.stderr.write(f\"ERROR: {exc}\")\r\n\r\n```\r\n\r\nand this is due to the content of gs://ml-pipeline/sample-pipeline/xgboost/initialization_actions.sh\r\n\r\n```\r\n#!/bin/bash -e\r\n\r\n# Copyright 2018 Google LLC\r\n#\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n#\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n#\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\r\n# Initialization actions to run in dataproc setup.\r\n# The script will be run on each node in a dataproc cluster.\r\n\r\neasy_install pip\r\npip install tensorflow==1.4.1\r\npip install pandas==0.18.1\r\n```\r\nI'm going to update its content to use Python 3.",
"After #5062 and updating gs://ml-pipeline/sample-pipeline/xgboost/initialization_actions.sh, the previous error is fixed.\r\n\r\nNow we got a runtime error submitting Dataproc spark job:\r\n```\r\nException in thread \"main\" java.lang.NoClassDefFoundError: org/apache/spark/ml/util/MLWritable$class\r\n\tat ml.dmlc.xgboost4j.scala.spark.XGBoostEstimator.<init>(XGBoostEstimator.scala:38)\r\n\tat ml.dmlc.xgboost4j.scala.spark.XGBoostEstimator.<init>(XGBoostEstimator.scala:42)\r\n\tat ml.dmlc.xgboost4j.scala.spark.XGBoost$.trainWithDataFrame(XGBoost.scala:182)\r\n\tat ml.dmlc.xgboost4j.scala.example.spark.XGBoostTrainer$.main(XGBoostTrainer.scala:120)\r\n\tat ml.dmlc.xgboost4j.scala.example.spark.XGBoostTrainer.main(XGBoostTrainer.scala)\r\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\r\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\r\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\r\n\tat java.lang.reflect.Method.invoke(Method.java:498)\r\n\tat org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)\r\n\tat org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)\r\n\tat org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)\r\n\tat org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)\r\n\tat org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)\r\n\tat org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)\r\n\tat org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)\r\n\tat org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)\r\nCaused by: java.lang.ClassNotFoundException: org.apache.spark.ml.util.MLWritable$class\r\n\tat java.net.URLClassLoader.findClass(URLClassLoader.java:382)\r\n\tat java.lang.ClassLoader.loadClass(ClassLoader.java:418)\r\n\tat java.lang.ClassLoader.loadClass(ClassLoader.java:351)\r\n\t... 17 more\r\n21/01/30 03:22:56 INFO org.spark_project.jetty.server.AbstractConnector: Stopped Spark@72ab05ed{HTTP/1.1,[http/1.1]}{0.0.0.0:0}\r\nJob output is complete\r\n```\r\n\r\nGuessing we need to update this package [1] to accommodate newer version of Spark that comes with Dataproc 1.5 image. \r\n\r\n[1] gs://ml-pipeline/sample-pipeline/xgboost/xgboost4j-example-0.8-SNAPSHOT-jar-with-dependencies.jar",
"Opened https://github.com/kubeflow/pipelines/issues/5089 to track the XGBoost issue, handing over the rest to @hongye-sun .",
"Postsubmit is now healthy"
] | 2021-01-19T16:15:51 | 2021-02-05T02:22:35 | 2021-02-05T02:22:35 | CONTRIBUTOR | null | KFP Oncall noticed [kubeflow-pipeline-postsubmit-integration-test](https://k8s-testgrid.appspot.com/sig-big-data#kubeflow-pipeline-postsubmit-integration-test) failing.
/kind bug | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5007/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5006 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5006/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5006/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5006/events | https://github.com/kubeflow/pipelines/issues/5006 | 789,047,098 | MDU6SXNzdWU3ODkwNDcwOTg= | 5,006 | Kubeflow Multiuser Isolation for pipeline runs: Read-only access | {
"login": "kd303",
"id": 16409185,
"node_id": "MDQ6VXNlcjE2NDA5MTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/16409185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kd303",
"html_url": "https://github.com/kd303",
"followers_url": "https://api.github.com/users/kd303/followers",
"following_url": "https://api.github.com/users/kd303/following{/other_user}",
"gists_url": "https://api.github.com/users/kd303/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kd303/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kd303/subscriptions",
"organizations_url": "https://api.github.com/users/kd303/orgs",
"repos_url": "https://api.github.com/users/kd303/repos",
"events_url": "https://api.github.com/users/kd303/events{/privacy}",
"received_events_url": "https://api.github.com/users/kd303/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 1289604862,
"node_id": "MDU6TGFiZWwxMjg5NjA0ODYy",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/authentication",
"name": "area/authentication",
"color": "d2b48c",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"I'm interested in something like this too, mostly to have a public namespace that everyone can see without having to manually add via contributors UI. Until support for RBAC groups in the Profile controller comes along, it's not easy. As for your use case, I haven't tried this out, but I'd suggest the following:\r\n- Apply a resource quota for GPUs on the non-shared profiles to disallow GPU (see the [docs](https://www.kubeflow.org/docs/components/multi-tenancy/getting-started/#managing-contributors-manually) on where to put these)\r\n- Create a custom Role with the specific permissions you need and add RoleBindings for your users in the training namespace\r\n\r\nside note: is this really a bug? seems like more of a feature and/or documentation request",
"What you requested is in scope of Issue #3513.\r\nIt will be released in the next Kubeflow release. So you can wait for it."
] | 2021-01-19T14:00:47 | 2021-01-29T01:03:05 | 2021-01-29T01:02:56 | NONE | null | Question:
Please note: as advised, I am reopening this question per the suggestions in [kubeflow/kubeflow#5510](https://github.com/kubeflow/kubeflow/issues/5510).
We are trying to create a separate namespace to which runs can be submitted. I understand from the documentation that pipelines currently have restrictions around namespaces; however, in our use case we want to allocate the GPU resources in that namespace, so users can submit their jobs to a queue and we deploy those jobs from the queue as and when resources become available.
My questions are:
1. How can we ensure users have read-only access to the above namespace (e.g. a training namespace), so that they can view their runs, their status, and the logs and metrics generated? Is this possible? (I am aware that as of 1.1 pipelines are not isolated between users, which we are okay to live with.)
2. At the same time, they should not be able to create resources in that specific namespace - for example Notebook servers, other pipeline runs, or a KFServing model deployment (for this forum, consider pipeline runs).
3. Managing the above scenario via contributors is not ideal, as it grants access to all resources in the common namespace.
We are trying to ensure we utilize the GPUs in the best possible manner (with GPU sharing disabled in Kubernetes, this has become all the more necessary) and that resources are not stuck with a specific team/individual.
Please provide any updates, links, or direction with which we can solve this. Unfortunately the documentation is not great on this aspect, so I am not able to proceed (I have read through [Issue #3513](https://github.com/kubeflow/pipelines/issues/3513); it does not look complete yet).
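For illustration, a hedged sketch of the custom-Role approach suggested in the first comment above, using the Kubernetes Python client. The `training` namespace, the user email, and the resource list are assumptions, not values from this issue; Argo `workflows` are assumed to back the pipeline runs:

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# Read-only role: viewers can inspect pods, pod logs, and Argo workflows
# in the shared namespace, but cannot create anything.
role = client.V1Role(
    metadata=client.V1ObjectMeta(name='training-viewer', namespace='training'),
    rules=[
        client.V1PolicyRule(api_groups=[''], resources=['pods', 'pods/log'],
                            verbs=['get', 'list', 'watch']),
        client.V1PolicyRule(api_groups=['argoproj.io'], resources=['workflows'],
                            verbs=['get', 'list', 'watch']),
    ],
)
rbac.create_namespaced_role(namespace='training', body=role)

# Bind the role to a user; 'user@example.com' is a placeholder.
binding = client.V1RoleBinding(
    metadata=client.V1ObjectMeta(name='training-viewer-binding',
                                 namespace='training'),
    role_ref=client.V1RoleRef(api_group='rbac.authorization.k8s.io',
                              kind='Role', name='training-viewer'),
    subjects=[client.V1Subject(kind='User', name='user@example.com',
                               api_group='rbac.authorization.k8s.io')],
)
rbac.create_namespaced_role_binding(namespace='training', body=binding)
```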
KFP version: 1.0
Kubeflow version 1.1
KFP SDK version: NA
/kind bug
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5006/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5006/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5005 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5005/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5005/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5005/events | https://github.com/kubeflow/pipelines/issues/5005 | 788,044,961 | MDU6SXNzdWU3ODgwNDQ5NjE= | 5,005 | pipeline step pod which come up with alpine image when cachedExecution is set don't run with restricted psp | {
"login": "orugantichetan",
"id": 69839506,
"node_id": "MDQ6VXNlcjY5ODM5NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/69839506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orugantichetan",
"html_url": "https://github.com/orugantichetan",
"followers_url": "https://api.github.com/users/orugantichetan/followers",
"following_url": "https://api.github.com/users/orugantichetan/following{/other_user}",
"gists_url": "https://api.github.com/users/orugantichetan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orugantichetan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orugantichetan/subscriptions",
"organizations_url": "https://api.github.com/users/orugantichetan/orgs",
"repos_url": "https://api.github.com/users/orugantichetan/repos",
"events_url": "https://api.github.com/users/orugantichetan/events{/privacy}",
"received_events_url": "https://api.github.com/users/orugantichetan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | {
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"/assign @Ark-kun \r\n\r\nUnderstood the issue, that's indeed a limitation of KFP caching.\r\nNote, the feature is in Alpha stage, you can also disable it via https://www.kubeflow.org/docs/pipelines/caching/#disabling-caching-in-your-kubeflow-pipelines-deployment\r\n(the doc is slightly outdated, the name of that configuration should be `cache-webhook-{namespace}` instead.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-01-18T08:25:48 | 2022-04-28T18:00:25 | 2022-04-28T18:00:25 | NONE | null | ### What steps did you take:
installed kubeflow with kfp 1.0.4 manifest
### What happened:
Pipeline step pods that come up with the alpine image when cachedExecution is set do not run under a restricted PSP. Error: `container has runAsNonRoot and image will run as root`.
Making the image name configurable through an env var, or having the workflow controller add a default securityContext to the pipeline step, would be helpful.
https://github.com/kubeflow/pipelines/blob/3b9fdff26be7d0f04563bbac8bc23807caa69ae2/backend/src/cache/server/mutation.go#L136
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5005/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5013 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5013/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5013/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5013/events | https://github.com/kubeflow/pipelines/issues/5013 | 789,878,475 | MDU6SXNzdWU3ODk4Nzg0NzU= | 5,013 | Development friendly Kubeflow experience | {
"login": "JoshZastrow",
"id": 5170754,
"node_id": "MDQ6VXNlcjUxNzA3NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5170754?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoshZastrow",
"html_url": "https://github.com/JoshZastrow",
"followers_url": "https://api.github.com/users/JoshZastrow/followers",
"following_url": "https://api.github.com/users/JoshZastrow/following{/other_user}",
"gists_url": "https://api.github.com/users/JoshZastrow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoshZastrow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoshZastrow/subscriptions",
"organizations_url": "https://api.github.com/users/JoshZastrow/orgs",
"repos_url": "https://api.github.com/users/JoshZastrow/repos",
"events_url": "https://api.github.com/users/JoshZastrow/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoshZastrow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1682717392,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzky",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question",
"name": "kind/question",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | null | [] | null | [
"@JoshZastrow Just to be clear, are you talking about Kubeflow as a whole or pipelines specifically? In regards to pipelines, it is possible to create python function based components rather than needing to create images (you do need to have a base image that contains the necessary dependencies such as pytorch for example). https://www.kubeflow.org/docs/pipelines/sdk/python-function-components/\r\n",
"https://github.com/kubeflow-kale/kale this might be useful. ",
"Hi @DavidSpek , ah yes I should have been more specific--I am talking more about Kubeflow Pipelines. \r\n\r\nSeems like even with python based functions, anything that gets imported [needs to exist on the image.](https://www.kubeflow.org/docs/pipelines/sdk/python-function-components/#using-and-installing-python-packages)\r\n\r\nFor fast development--perhaps the way to go is make every single function in the application a component. This is just a little hard to adopt for an existing python project that already has its own packages, modules, functions and classes. \r\n\r\nexample:\r\n```\r\nsrc\r\n -preprocess\r\n -scalers.py\r\n -encoders.py\r\n -setup.py\r\ncomponents\r\n -preprocessing.py\r\npipeline.py\r\n```\r\n\r\nThe pipeline would be built from components, but there's application code in `src` being actively developed. There could be many existing functions and classes in there that are used in the components. To test a change in src against the pipeline (for say a new experiment), I don't see a way of running the pipeline without building a new image that has a copy of the latest code change, then once it's uploaded to a docker registry, submitting a new pipeline that points to this version (not hard if we go with `latest`), then executing the pipeline on Kubeflow and seeing what the logs say. \r\n\r\n@munagekar ah yeah I like Kale! This could be a very cool tool (and a big notebook user myself) but the devs on my team actually prefer to develop the pipeline in a `.py` script and keep logic in local modules. 🤷🏻 \r\n",
"/area pipelines\r\nPing @Bobgy. Seeing as this is related to Pipelines specifically maybe it can be moved to the kubeflow/pipelines repo. ",
"> I don't see a way of running the pipeline without building a new image that has a copy of the latest code change, then once it's uploaded to a docker registry, submitting a new pipeline that points to this version (not hard if we go with latest), then executing the pipeline on Kubeflow and seeing what the logs say.\r\n\r\nThis is exactly what we implemented in my organization. We use git tags instead of latest. ",
"Some documentation you can refer to: https://cloud.google.com/solutions/machine-learning/architecture-for-mlops-using-tfx-kubeflow-pipelines-and-cloud-build\r\n\r\nThere's a CI/CD pipeline needed to deploy the CT (continuous training) pipeline that runs in KFP.",
"Looks like this open PR would help with this: https://github.com/kubeflow/pipelines/pull/4983",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-01-15T19:56:24 | 2022-04-28T18:00:14 | 2022-04-28T18:00:14 | NONE | null | /kind feature
**Why you need this feature:**
Say I have a local package containing my application logic (e.g. cleaning, feature generation, ML model training, etc.). This local package contains modules and functions used in my component.
I want to make changes to the application logic (e.g. change a feature scaling method), then run my pipeline and 1) make sure the pipeline works or 2) see an improvement in my offline metrics.
My component image needs to have all the dependencies baked in, which seems to mean that if I want to run my Kubeflow pipeline with new code, I need to re-build and push an image each time. This is a pretty slow process, and it discourages us from making smaller components (it is easier to develop pipelines in Python and run them as one bigger component via a CLI command).
I'm imagining one solution: a local Kubeflow instance whose component images point to locally built Docker images with the local application code mounted, giving a much faster iteration cycle.
Is there a better way to develop faster with Kubeflow? It says it's experimentation friendly, but I haven't felt that from working with Kubeflow so far (it is nice that it has experiment management/tracking in the UI though!). I don't feel like I can swap my current experimentation workflow out for Kubeflow.
Maybe a user guide on developing locally could be a good solution? Something equivalent to `pip install -e .` for Kubeflow components would be great!
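For what it's worth, a minimal sketch of the function-based-component approach mentioned in the first comment above; the function name, base image, and package version are illustrative, not from this issue:

```python
from kfp.components import create_component_from_func

def scale_feature(value: float, factor: float) -> float:
    """Toy stand-in for application logic that changes often."""
    return value * factor

# The function body is shipped inside the component spec, so editing the
# code and re-submitting a run needs no image rebuild; the listed
# packages are installed on top of the base image at run time.
scale_op = create_component_from_func(
    scale_feature,
    base_image='python:3.8',
    packages_to_install=['pandas==1.1.5'],  # illustrative dependency
)
```

This still does not cover imports from a local package, which is the gap described above.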
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5013/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/5000 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/5000/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/5000/comments | https://api.github.com/repos/kubeflow/pipelines/issues/5000/events | https://github.com/kubeflow/pipelines/issues/5000 | 786,288,727 | MDU6SXNzdWU3ODYyODg3Mjc= | 5,000 | Kubeflow installation error 1.2 | {
"login": "ajinkya933",
"id": 17012391,
"node_id": "MDQ6VXNlcjE3MDEyMzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/17012391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ajinkya933",
"html_url": "https://github.com/ajinkya933",
"followers_url": "https://api.github.com/users/ajinkya933/followers",
"following_url": "https://api.github.com/users/ajinkya933/following{/other_user}",
"gists_url": "https://api.github.com/users/ajinkya933/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ajinkya933/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ajinkya933/subscriptions",
"organizations_url": "https://api.github.com/users/ajinkya933/orgs",
"repos_url": "https://api.github.com/users/ajinkya933/repos",
"events_url": "https://api.github.com/users/ajinkya933/events{/privacy}",
"received_events_url": "https://api.github.com/users/ajinkya933/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930619528,
"node_id": "MDU6TGFiZWw5MzA2MTk1Mjg=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/testing",
"name": "area/testing",
"color": "00daff",
"default": false,
"description": ""
},
{
"id": 930619542,
"node_id": "MDU6TGFiZWw5MzA2MTk1NDI=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/engprod",
"name": "area/engprod",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"I solved this issue by changing the default region:\r\n\r\n`aws configure set default.region us-west-2`\r\n\r\nThe error is because kfctl completely ignores the region set in the .yaml file and just uses the aws cli default."
] | 2021-01-14T20:11:52 | 2021-01-15T05:14:04 | 2021-01-15T05:14:04 | NONE | null | ### What steps did you take:
I set up a cluster in the me-south-1 region with 2x r5.xlarge instances, following the docs here: https://www.kubeflow.org/docs/aws/deploy/install-kubeflow/
### What happened:
The error I get is:
```
Error: failed to apply: (kubeflow.error): Code 500 with message: coordinator Apply failed for aws: (kubeflow.error): Code 400 with message: IAM for Service Account is not supported on non-EKS cluster <nil>
kfctl exited with error: failed to apply: (kubeflow.error): Code 500 with message: coordinator Apply failed for aws: (kubeflow.error): Code 400 with message: IAM for Service Account is not supported on non-EKS cluster <nil>
```
### What did you expect to happen:
Kubeflow should have successfully installed
### Environment:
How did you deploy Kubeflow Pipelines (KFP)?
KFP version:
KFP SDK version:
kfp 1.2.0
kfp-pipeline-spec 0.1.3.1
kfp-server-api 1.2.0
/kind bug
/area backend
/area testing
/area engprod
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/5000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/5000/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/4999 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/4999/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/4999/comments | https://api.github.com/repos/kubeflow/pipelines/issues/4999/events | https://github.com/kubeflow/pipelines/issues/4999 | 786,239,231 | MDU6SXNzdWU3ODYyMzkyMzE= | 4,999 | Discuss dependency update strategy of KFP | {
"login": "capri-xiyue",
"id": 52932582,
"node_id": "MDQ6VXNlcjUyOTMyNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/capri-xiyue",
"html_url": "https://github.com/capri-xiyue",
"followers_url": "https://api.github.com/users/capri-xiyue/followers",
"following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}",
"gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions",
"organizations_url": "https://api.github.com/users/capri-xiyue/orgs",
"repos_url": "https://api.github.com/users/capri-xiyue/repos",
"events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/capri-xiyue/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1682717377,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzc3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/discussion",
"name": "kind/discussion",
"color": "ecfc15",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"fyi @NikeNano @Bobgy ",
"There are several main areas that might need updating:\r\n\r\n* Python SDK deps\r\n* Backend Go deps\r\n* Kubernetes deps (e.g. Argo)\r\n* Deps of micro services (Metadata Writer, Visualization Server, etc.)\r\n\r\nFor python (SDK and Visualizations) we already have update scripts.",
"And\r\n* frontend deps\r\n* MLMD client/server version\r\n\r\n+1 for building scripts that automate upgrading dependencies so that we try and test an upgrade more often.",
"FYI, I've created an issue before with similar content: https://github.com/kubeflow/pipelines/issues/4682\r\n@capri-xiyue shall we consolidate the two issues?",
"closed this since it‘s a duplicate of #4682 "
] | 2021-01-14T19:05:19 | 2021-01-19T19:36:13 | 2021-01-19T19:36:13 | CONTRIBUTOR | null | Currently, we do not update dependencies regularly, so by the time we do update them they are already quite old, and many things can break after some dependencies are updated.
It's worth discussing the dependency update strategy of KFP.
It looks like there is no script or doc related to updating dependencies. We may need to write a script like https://github.com/knative/eventing/blob/master/hack/update-deps.sh and write some docs about updating dependencies.
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/4999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/4999/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/4997 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/4997/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/4997/comments | https://api.github.com/repos/kubeflow/pipelines/issues/4997/events | https://github.com/kubeflow/pipelines/issues/4997 | 785,995,577 | MDU6SXNzdWU3ODU5OTU1Nzc= | 4,997 | Pipeline runs are disappearing | {
"login": "kd303",
"id": 16409185,
"node_id": "MDQ6VXNlcjE2NDA5MTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/16409185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kd303",
"html_url": "https://github.com/kd303",
"followers_url": "https://api.github.com/users/kd303/followers",
"following_url": "https://api.github.com/users/kd303/following{/other_user}",
"gists_url": "https://api.github.com/users/kd303/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kd303/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kd303/subscriptions",
"organizations_url": "https://api.github.com/users/kd303/orgs",
"repos_url": "https://api.github.com/users/kd303/repos",
"events_url": "https://api.github.com/users/kd303/events{/privacy}",
"received_events_url": "https://api.github.com/users/kd303/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
},
{
"id": 2710158147,
"node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info",
"name": "needs more info",
"color": "DBEF12",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"Hello @kd303, I need more information in order to reproduce this issue. Based on the description, this happens when Pods are getting initialized. But Pods should be up and running when triggering a pipeline run. Did you uninstall KFP instance and redeploy before the issue happens?\r\n\r\nTo further investigate this issue, would you like to answer the followings:\r\n1. Reproduce steps in more detail: What tools you are using for local cluster deployment? What steps have been taken from beginning? \r\n2. Does it happen to all pipelines you create, or does it happen to specific pipeline? Can you share a sample of pipeline for reproducing the issue?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n",
"After lot of runs and debugging we realize that 'delete run' has been triggering. I am documenting this for anyone else.\r\n\r\nWe have been using APIs to automtically running our pipeline as well checking the status of the same. Now when the runs are longer then status check intervals, we saw in our code we are authenticating the user to make the API call every single time and when second time the authentication is done to check the status, it seems the runs were disappearing. for now we have changed the design to use Exit handlers for status updates (to an external tool where the dashboards are created for users) instead of using the status check APIs. \r\n\r\nJust documenting for others in case they face similar issue, although I am bit puzzled at this point as to why delete run has been triggering second time."
] | 2021-01-14T13:38:00 | 2022-05-04T06:15:09 | 2022-04-29T03:59:40 | NONE | null | ### What steps did you take:
Pipeline runs disappear when executing a pipeline using the SDK. When we run the pipeline for a specific experiment, we can see the logs appearing in the pods, and the run id (UUID) is also generated. I have tracked the mysql/mlpipeline/run_details table and can also see the run appearing there.
However, in some cases after the above observations, the run disappears from mysql and nothing appears in the UI anymore.
### What happened:
### What did you expect to happen:
Usually this is observed while the Pods are getting initialized; for the same pipeline, the run sometimes works.
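For reference, a minimal sketch of the exit-handler pattern described in the resolution comment above, which reports status once at the end of a run instead of polling the status API; the pipeline name, images, and commands are placeholders:

```python
import kfp.dsl as dsl

@dsl.pipeline(name='run-with-status-report')
def pipeline():
    # The exit op runs last regardless of success or failure; the echo
    # stands in for pushing status to an external dashboard/tool.
    notify = dsl.ContainerOp(
        name='notify',
        image='alpine:3.12',
        command=['sh', '-c', 'echo run finished'],
    )
    with dsl.ExitHandler(notify):
        dsl.ContainerOp(
            name='train',
            image='alpine:3.12',
            command=['sh', '-c', 'echo training'],
        )
```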
### Environment:
KFP 1.1 on a local Kubernetes cluster
How did you deploy Kubeflow Pipelines (KFP)?
Full kubeflow deployment, on premise cluster
KFP version: 1.0
KFP SDK version: 1.
/kind bug
/area backend
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/4997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/4997/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/4991 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/4991/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/4991/comments | https://api.github.com/repos/kubeflow/pipelines/issues/4991/events | https://github.com/kubeflow/pipelines/issues/4991 | 785,595,881 | MDU6SXNzdWU3ODU1OTU4ODE= | 4,991 | Tests - Marketplace verification does not fail on errors | {
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930619528,
"node_id": "MDU6TGFiZWw5MzA2MTk1Mjg=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/testing",
"name": "area/testing",
"color": "00daff",
"default": false,
"description": ""
},
{
"id": 1606220157,
"node_id": "MDU6TGFiZWwxNjA2MjIwMTU3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/deployment/marketplace",
"name": "area/deployment/marketplace",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | {
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
},
{
"login": "Bobgy",
"id": 4957653,
"node_id": "MDQ6VXNlcjQ5NTc2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bobgy",
"html_url": "https://github.com/Bobgy",
"followers_url": "https://api.github.com/users/Bobgy/followers",
"following_url": "https://api.github.com/users/Bobgy/following{/other_user}",
"gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions",
"organizations_url": "https://api.github.com/users/Bobgy/orgs",
"repos_url": "https://api.github.com/users/Bobgy/repos",
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bobgy/received_events",
"type": "User",
"site_admin": false
},
{
"login": "numerology",
"id": 9604122,
"node_id": "MDQ6VXNlcjk2MDQxMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9604122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/numerology",
"html_url": "https://github.com/numerology",
"followers_url": "https://api.github.com/users/numerology/followers",
"following_url": "https://api.github.com/users/numerology/following{/other_user}",
"gists_url": "https://api.github.com/users/numerology/gists{/gist_id}",
"starred_url": "https://api.github.com/users/numerology/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/numerology/subscriptions",
"organizations_url": "https://api.github.com/users/numerology/orgs",
"repos_url": "https://api.github.com/users/numerology/repos",
"events_url": "https://api.github.com/users/numerology/events{/privacy}",
"received_events_url": "https://api.github.com/users/numerology/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"It seems all of the findings are warnings though, and the verification status is PASSED",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 2021-01-14T02:19:58 | 2021-08-22T07:04:04 | 2021-08-22T07:04:04 | CONTRIBUTOR | null | https://storage.googleapis.com/kubernetes-jenkins/logs/kubeflow-pipeline-postsubmit-mkp-e2e-test/1346987369131151360/build-log.txt
```
Step #1 - "verify": === SCHEMA VALIDATION WARNING ===
Step #1 - "verify": WARNING: Schema is incompatible with the latest deployer, would fail with:
Step #1 - "verify":
Step #1 - "verify": Traceback (most recent call last):
Step #1 - "verify": File "/bin/validate_schema.py", line 35, in <module>
Step #1 - "verify": main()
Step #1 - "verify": File "/bin/validate_schema.py", line 31, in main
Step #1 - "verify": schema_values_common.load_schema_and_validate(args)
Step #1 - "verify": File "/bin/schema_values_common.py", line 65, in load_schema_and_validate
Step #1 - "verify": return load_schema(parsed_args).validate()
Step #1 - "verify": File "/bin/config_helper.py", line 170, in validate
Step #1 - "verify": self._x_google_marketplace._deployer_service_account.validate()
Step #1 - "verify": File "/bin/config_helper.py", line 1039, in validate
Step #1 - "verify": 'SERVICE_ACCOUNT must have a `description` '
Step #1 - "verify": config_helper.InvalidSchema: SERVICE_ACCOUNT must have a `description` explaining purpose and permission requirements. See docs: https://github.com/GoogleCloudPlatform/marketplace-k8s-app-tools/blob/master/docs/schema.md#type-service_account
Step #1 - "verify": ======== END OF WARNING =========
Step #1 - "verify": === ERROR SUMMARY ===
Step #1 - "verify":
Step #1 - "verify": Error events found in namespace "apptest-lf0xqn9s"
Step #1 - "verify": LAST SEEN TYPE REASON OBJECT MESSAGE
Step #1 - "verify": 4m26s Warning FailedMount pod/cache-server-d97bc8bdb-n9w2c MountVolume.SetUp failed for volume "webhook-tls-certs" : secret "webhook-server-tls" not found
Step #1 - "verify": 3m27s Warning FailedMount pod/cache-server-d97bc8bdb-n9w2c Unable to mount volumes for pod "cache-server-d97bc8bdb-n9w2c_apptest-lf0xqn9s(08120b68-63c6-48e3-b1c2-cda8cbe18f66)": timeout expired waiting for volumes to attach or mount for pod "apptest-lf0xqn9s"/"cache-server-d97bc8bdb-n9w2c". list of unmounted volumes=[webhook-tls-certs]. list of unattached volumes=[webhook-tls-certs kubeflow-pipelines-cache-token-njs8b]
Step #1 - "verify": 2m17s Warning BackOff pod/metadata-grpc-deployment-577c66c4dd-tvm5n Back-off restarting failed container
Step #1 - "verify": 5m29s Warning FailedMount pod/proxy-agent-795bfdc898-lt4g9 MountVolume.SetUp failed for volume "proxy-agent-runner-token-hphss" : couldn't propagate object cache: timed out waiting for the condition
Step #1 - "verify": 5m28s Warning FailedMount pod/workflow-controller-87f56f879-df6fj MountVolume.SetUp failed for volume "argo-token-v5l4z" : couldn't propagate object cache: timed out waiting for the condition
Step #1 - "verify": =====================
Step #1 - "verify":
Step #1 - "verify": === VERIFICATION STATUS ===
Step #1 - "verify": PASSED
Step #1 - "verify": ===========================
Finished Step #1 - "verify"
``` | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/4991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/4991/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/4987 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/4987/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/4987/comments | https://api.github.com/repos/kubeflow/pipelines/issues/4987/events | https://github.com/kubeflow/pipelines/issues/4987 | 785,248,259 | MDU6SXNzdWU3ODUyNDgyNTk= | 4,987 | Add clear instructions for authorization code / token | {
"login": "susan-shu-c",
"id": 23287722,
"node_id": "MDQ6VXNlcjIzMjg3NzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/23287722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susan-shu-c",
"html_url": "https://github.com/susan-shu-c",
"followers_url": "https://api.github.com/users/susan-shu-c/followers",
"following_url": "https://api.github.com/users/susan-shu-c/following{/other_user}",
"gists_url": "https://api.github.com/users/susan-shu-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susan-shu-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susan-shu-c/subscriptions",
"organizations_url": "https://api.github.com/users/susan-shu-c/orgs",
"repos_url": "https://api.github.com/users/susan-shu-c/repos",
"events_url": "https://api.github.com/users/susan-shu-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/susan-shu-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-01-13T16:23:31 | 2021-01-13T18:30:40 | 2021-01-13T18:30:40 | CONTRIBUTOR | null | I was following [this official example](https://colab.research.google.com/github/kubeflow/website/blob/master/content/en/docs/pipelines/sdk/python-function-components.ipynb) to submit a pipeline via Jupyter or Python script
I filled in the client id and credentials [following these instructions](https://www.kubeflow.org/docs/gke/pipelines/authentication-sdk/#connecting-to-kubeflow-pipelines-in-a-full-kubeflow-deployment), but when I try to [compile the pipeline](https://www.kubeflow.org/docs/pipelines/sdk/build-component/#compile-the-pipeline) with
```
dsl-compile --py [path/to/python/file] --output [path/to/output/tar.gz]
```
The resulting prompt doesn't make it evident that the URL can be copy-pasted into the browser to get the authorization code, and I got stuck on it for a while. Looking back it's kind of an Occam's razor thing, but even a simple instruction like "copy-paste this URL" could have saved a lot of time.
![Screen Shot 2021-01-12 at 5 46 15 PM copy](https://user-images.githubusercontent.com/23287722/104479213-2982f800-5591-11eb-81ca-e2cf4df92b4f.png)
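For context, the prompt in the screenshot comes from the SDK's OAuth flow; a hedged sketch of the call that triggers it follows, where every endpoint and client value is a placeholder. The printed URL is meant to be opened in a browser, and the resulting authorization code pasted back at the prompt:

```python
import kfp

# Constructing the client against an IAP-protected endpoint starts the
# OAuth flow: it prints a URL to open in a browser, then waits for the
# authorization code to be pasted back.
client = kfp.Client(
    host='https://<deployment>.endpoints.<project>.cloud.goog/pipeline',
    client_id='<iap-oauth-client-id>.apps.googleusercontent.com',
    other_client_id='<desktop-oauth-client-id>.apps.googleusercontent.com',
    other_client_secret='<desktop-oauth-client-secret>',
)
```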
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/4987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/4987/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/4986 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/4986/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/4986/comments | https://api.github.com/repos/kubeflow/pipelines/issues/4986/events | https://github.com/kubeflow/pipelines/issues/4986 | 785,247,170 | MDU6SXNzdWU3ODUyNDcxNzA= | 4,986 | Add more detail to authorization step | {
"login": "susan-shu-c",
"id": 23287722,
"node_id": "MDQ6VXNlcjIzMjg3NzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/23287722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susan-shu-c",
"html_url": "https://github.com/susan-shu-c",
"followers_url": "https://api.github.com/users/susan-shu-c/followers",
"following_url": "https://api.github.com/users/susan-shu-c/following{/other_user}",
"gists_url": "https://api.github.com/users/susan-shu-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susan-shu-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susan-shu-c/subscriptions",
"organizations_url": "https://api.github.com/users/susan-shu-c/orgs",
"repos_url": "https://api.github.com/users/susan-shu-c/repos",
"events_url": "https://api.github.com/users/susan-shu-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/susan-shu-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-01-13T16:22:11 | 2021-01-13T16:33:19 | 2021-01-13T16:33:19 | CONTRIBUTOR | null | I was following [this official example](https://colab.research.google.com/github/kubeflow/website/blob/master/content/en/docs/pipelines/sdk/python-function-components.ipynb) to submit a pipeline via Jupyter or Python script
I filled in the client id and credentials [following these instructions](https://www.kubeflow.org/docs/gke/pipelines/authentication-sdk/#connecting-to-kubeflow-pipelines-in-a-full-kubeflow-deployment), but when I try to [compile the pipeline](https://www.kubeflow.org/docs/pipelines/sdk/build-component/#compile-the-pipeline) with
```
dsl-compile --py [path/to/python/file] --output [path/to/output/tar.gz]
```
The resulting prompt doesn't make it evident that the URL can be copy-pasted into the browser, and I got stuck on it for a while. Looking back it's kind of an Occam's razor thing, but even a simple instruction like "copy-paste this URL" could have saved a lot of time.
![Screen Shot 2021-01-12 at 5 46 15 PM copy](https://user-images.githubusercontent.com/23287722/104479213-2982f800-5591-11eb-81ca-e2cf4df92b4f.png)
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/4986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/4986/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/4980 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/4980/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/4980/comments | https://api.github.com/repos/kubeflow/pipelines/issues/4980/events | https://github.com/kubeflow/pipelines/issues/4980 | 784,640,292 | MDU6SXNzdWU3ODQ2NDAyOTI= | 4,980 | Error 1114: The table 'run_details' is full" | {
"login": "brtasavpatel",
"id": 38225409,
"node_id": "MDQ6VXNlcjM4MjI1NDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/38225409?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brtasavpatel",
"html_url": "https://github.com/brtasavpatel",
"followers_url": "https://api.github.com/users/brtasavpatel/followers",
"following_url": "https://api.github.com/users/brtasavpatel/following{/other_user}",
"gists_url": "https://api.github.com/users/brtasavpatel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brtasavpatel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brtasavpatel/subscriptions",
"organizations_url": "https://api.github.com/users/brtasavpatel/orgs",
"repos_url": "https://api.github.com/users/brtasavpatel/repos",
"events_url": "https://api.github.com/users/brtasavpatel/events{/privacy}",
"received_events_url": "https://api.github.com/users/brtasavpatel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1682717392,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzky",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question",
"name": "kind/question",
"color": "2515fc",
"default": false,
"description": ""
},
{
"id": 2157634204,
"node_id": "MDU6TGFiZWwyMTU3NjM0MjA0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale",
"name": "lifecycle/stale",
"color": "bbbbbb",
"default": false,
"description": "The issue / pull request is stale, any activities remove this label."
}
] | closed | false | null | [] | null | [
"Which cloud platform do you deploy KFP? Is it part of a full Kubeflow installation?\r\n\r\nOur suggestion for productionizing a KFP instance is to use a cloud provider's managed SQL instance.\r\ne.g. in GCP we'd recommend using Cloud SQL, so that you'll be able to back up/resize/configure the SQL instance on your own",
"For KFP standalone, we have some documentation in https://github.com/kubeflow/pipelines/tree/master/manifests/kustomize/sample.",
"and how long did it take until your DB was full?",
"Immediate workaround could be resize the PV backing your mysql DB or delete old run records from the DB using KFP SDK/API",
"> Which cloud platform do you deploy KFP? Is it part of a full Kubeflow installation?\r\n> \r\n> Our suggestion for productionizing a KFP instance is to use a cloud provider's managed SQL instance.\r\n> e.g. in GCP we'd recommend using Cloud SQL, so that you'll be able to back up/resize/configure the SQL instance on your own\r\n\r\nI am on AWS, but we deploy Kubeflow by our self. It's a full Kubeflow Installation.",
"> and how long did it take until your DB was full?\r\n\r\nLittle over 2 months. but our usage increased this month only.",
"/cc @PatrickXYS \nFor providing more guidance",
"> > Which cloud platform do you deploy KFP? Is it part of a full Kubeflow installation?\r\n> > Our suggestion for productionizing a KFP instance is to use a cloud provider's managed SQL instance.\r\n> > e.g. in GCP we'd recommend using Cloud SQL, so that you'll be able to back up/resize/configure the SQL instance on your own\r\n> \r\n> I am on AWS, but we deploy Kubeflow by our self. It's a full Kubeflow Installation.\r\n\r\n@brtasavpatel Then it's not on EKS I think, are you utilizing MySQL or RDS on AWS? ",
"> > > Which cloud platform do you deploy KFP? Is it part of a full Kubeflow installation?\r\n> > > Our suggestion for productionizing a KFP instance is to use a cloud provider's managed SQL instance.\r\n> > > e.g. in GCP we'd recommend using Cloud SQL, so that you'll be able to back up/resize/configure the SQL instance on your own\r\n> > \r\n> > \r\n> > I am on AWS, but we deploy Kubeflow by our self. It's a full Kubeflow Installation.\r\n> \r\n> @brtasavpatel Then it's not on EKS I think, are you utilizing MySQL or RDS on AWS?\r\n\r\n@PatrickXYS Kubernetes is on EKS. I meant we didn't use any hosted Kubeflow Solutions, and I am utilizing MySQL",
"So the way I would go is to check the log of `mysql` pod, see if anything wrong there\r\n\r\nAnother question is:\r\n\r\n> Kubeflow was already running ~50 pipelines at the time when new pipeline runs were submitted resulting in about ~500 pipeline runs running.\r\n\r\nIs the issue happening everytime when you reached to the same level of workload?",
"> So the way I would go is to check the log of `mysql` pod, see if anything wrong there\r\n> \r\n> Another question is:\r\n> \r\n> > Kubeflow was already running ~50 pipelines at the time when new pipeline runs were submitted resulting in about ~500 pipeline runs running.\r\n> \r\n> Is the issue happening everytime when you reached to the same level of workload?\r\n\r\nWe probably first time got this burst of pipeline submissions, so would be hard to tell that.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n"
] | 2021-01-12T22:28:05 | 2022-04-28T18:00:12 | 2022-04-28T18:00:12 | NONE | null | ### What steps did you take:
Kubeflow was already running ~50 pipelines when new pipeline runs were submitted, resulting in about ~500 pipeline runs in flight.
Volume size: 20 GB for the volume where the `mysql` deployment was running.
### What happened:
Pipeline runs failed with the error:
```
{"error":"Failed to create a new run.: InternalServerError: Failed to store run abc-xyz to table:
Error 1114: The table 'run_details' is full","message":"Failed to create a new run.:
InternalServerError: Failed to store run abc-xyz to table: Error 1114: The table 'run_details' is full",
"code":13,"details":[{"@type":"type.googleapis.com/api.Error","error_message":"Internal Server Error",
"error_details":"Failed to create a new run.: InternalServerError: Failed to store run abc-xyz to table:
Error 1114: The table 'run_details' is full"}]}
```
Kubeflow UI was also unresponsive when trying to check a pipeline run status.
### What did you expect to happen:
The Kubeflow UI should return/display an appropriate error.
Installation instructions should document this limitation. There should also be a way to auto-clean this table, or any other table that might be consuming a large amount of storage, to keep KFP functional.
### Environment:
KFP version: 1.0.1
KFP SDK version: 1.0.1
### Anything else you would like to add:
Checking the `mlpipelines.run_details` table showed that its size had grown to 20 GB.
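For anyone hitting this, a hedged cleanup sketch using the KFP SDK, along the lines of the workaround suggested in the comments; the endpoint and retention policy are assumptions, and since this deletes run records it should be tested carefully first:

```python
import kfp

client = kfp.Client(host='http://localhost:8888')  # placeholder endpoint

# Page through runs oldest-first and delete finished ones to reclaim
# space in the run_details table.
resp = client.list_runs(page_size=100, sort_by='created_at asc')
for run in resp.runs or []:
    if run.status in ('Succeeded', 'Failed', 'Error'):
        client.runs.delete_run(id=run.id)
```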
/kind bug
/area frontend
/area backend
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/4980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/4980/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/4976 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/4976/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/4976/comments | https://api.github.com/repos/kubeflow/pipelines/issues/4976/events | https://github.com/kubeflow/pipelines/issues/4976 | 784,436,336 | MDU6SXNzdWU3ODQ0MzYzMzY= | 4,976 | HTTP response body: RBAC: access denied from kfp_server_api when creating a tfx pipeline | {
"login": "PatrickGhosn",
"id": 33845447,
"node_id": "MDQ6VXNlcjMzODQ1NDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/33845447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PatrickGhosn",
"html_url": "https://github.com/PatrickGhosn",
"followers_url": "https://api.github.com/users/PatrickGhosn/followers",
"following_url": "https://api.github.com/users/PatrickGhosn/following{/other_user}",
"gists_url": "https://api.github.com/users/PatrickGhosn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PatrickGhosn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PatrickGhosn/subscriptions",
"organizations_url": "https://api.github.com/users/PatrickGhosn/orgs",
"repos_url": "https://api.github.com/users/PatrickGhosn/repos",
"events_url": "https://api.github.com/users/PatrickGhosn/events{/privacy}",
"received_events_url": "https://api.github.com/users/PatrickGhosn/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1682717392,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzky",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question",
"name": "kind/question",
"color": "2515fc",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"me too.\r\nwhen run from notebook server\r\nclient = kfp.Client()\r\nprint:\r\nERROR:root:Failed to get healthz info attempt 1 of 5.\r\nTraceback (most recent call last):\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp/_client.py\", line 312, in get_kfp_healthz\r\n response = self._healthz_api.get_healthz()\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api/healthz_service_api.py\", line 77, in get_healthz\r\n return self.get_healthz_with_http_info(**kwargs) # noqa: E501\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api/healthz_service_api.py\", line 162, in get_healthz_with_http_info\r\n collection_formats=collection_formats)\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py\", line 383, in call_api\r\n _preload_content, _request_timeout, _host)\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py\", line 202, in __call_api\r\n raise e\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py\", line 199, in __call_api\r\n _request_timeout=_request_timeout)\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py\", line 407, in request\r\n headers=headers)\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/rest.py\", line 248, in GET\r\n query_params=query_params)\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/rest.py\", line 238, in request\r\n raise ApiException(http_resp=r)\r\nkfp_server_api.exceptions.ApiException: (403)\r\nReason: Forbidden\r\nHTTP response headers: HTTPHeaderDict({'content-length': '19', 'content-type': 'text/plain', 'date': 'Wed, 13 Jan 2021 06:46:40 GMT', 'server': 'envoy', 'x-envoy-upstream-service-time': '0'})\r\nHTTP response body: RBAC: access denied",
"> me too.\r\n> when run from notebook server\r\n> client = kfp.Client()\r\n> print:\r\n> ERROR:root:Failed to get healthz info attempt 1 of 5.\r\n> Traceback (most recent call last):\r\n> File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp/_client.py\", line 312, in get_kfp_healthz\r\n> response = self._healthz_api.get_healthz()\r\n> File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api/healthz_service_api.py\", line 77, in get_healthz\r\n> return self.get_healthz_with_http_info(**kwargs) # noqa: E501\r\n> File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api/healthz_service_api.py\", line 162, in get_healthz_with_http_info\r\n> collection_formats=collection_formats)\r\n> File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py\", line 383, in call_api\r\n> _preload_content, _request_timeout, _host)\r\n> File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py\", line 202, in __call_api\r\n> raise e\r\n> File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py\", line 199, in __call_api\r\n> _request_timeout=_request_timeout)\r\n> File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py\", line 407, in request\r\n> headers=headers)\r\n> File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/rest.py\", line 248, in GET\r\n> query_params=query_params)\r\n> File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/rest.py\", line 238, in request\r\n> raise ApiException(http_resp=r)\r\n> kfp_server_api.exceptions.ApiException: (403)\r\n> Reason: Forbidden\r\n> HTTP response headers: HTTPHeaderDict({'content-length': '19', 'content-type': 'text/plain', 'date': 'Wed, 13 Jan 2021 06:46:40 GMT', 'server': 'envoy', 'x-envoy-upstream-service-time': '0'})\r\n> HTTP response body: RBAC: access denied\r\n\r\nFor the KFP client you need to call your public endpoint and set the IAP tokens.\r\n```\r\nclient = kfp.Client(host='https://test.endpoints.testexample.cloud.goog/pipeline',\r\n client_id='52352352525-fsd232354252352525.apps.googleusercontent.com',\r\n other_client_id='52352352525-sdfsdffqwfqwfqwfqwfqf.apps.googleusercontent.com',\r\n other_client_secret='ggewgerg-wefwg')\r\n```\r\n\r\nBut I can't find any option to do that with tfx.",
"Hi,\r\n\r\nI am facing the same issue when I try from jupyter. \r\nMoreover, I cannot even run a pipeline from UI. I get the following error:\r\n\r\n> Run creation failed\r\n> {\"error\":\"Failed to authorize the request.: BadRequestError: Experiment is required for CreateRun/CreateJob.: Missing experiment\",\"message\":\"Failed to authorize the request.: BadRequestError: Experiment is required for CreateRun/CreateJob.: Missing experiment\",\"code\":10,\"details\":[{\"@type\":\"type.googleapis.com/api.Error\",\"error_message\":\"Experiment is required for CreateRun/CreateJob.\",\"error_details\":\"Failed to authorize the request.: BadRequestError: Experiment is required for CreateRun/CreateJob.: Missing experiment\"}]}\r\n\r\nAre these issues related?\r\nIs there a fix on this or have we might done anything wrong during installation? \r\n",
"> Hi,\r\n> \r\n> I am facing the same issue when I try from jupyter.\r\n> Moreover, I cannot even run a pipeline from UI. I get the following error:\r\n> \r\n> > Run creation failed\r\n> > {\"error\":\"Failed to authorize the request.: BadRequestError: Experiment is required for CreateRun/CreateJob.: Missing experiment\",\"message\":\"Failed to authorize the request.: BadRequestError: Experiment is required for CreateRun/CreateJob.: Missing experiment\",\"code\":10,\"details\":[{\"@type\":\"type.googleapis.com/api.Error\",\"error_message\":\"Experiment is required for CreateRun/CreateJob.\",\"error_details\":\"Failed to authorize the request.: BadRequestError: Experiment is required for CreateRun/CreateJob.: Missing experiment\"}]}\r\n> \r\n> Are these issues related?\r\n> Is there a fix on this or have we might done anything wrong during installation?\r\nNo doesn't look related. Try reinstalling, I had a similar issue where experiments and pipelines were not opening from the UI. A reinstall solved it.",
"Facing the same issue, Any updates on this?\r\n\r\n```\r\nERROR:root:Failed to get healthz info attempt 5 of 5.\r\nTraceback (most recent call last):\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp/_client.py\", line 312, in get_kfp_healthz\r\n response = self._healthz_api.get_healthz()\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api/healthz_service_api.py\", line 77, in get_healthz\r\n return self.get_healthz_with_http_info(**kwargs) # noqa: E501\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api/healthz_service_api.py\", line 162, in get_healthz_with_http_info\r\n collection_formats=collection_formats)\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py\", line 383, in call_api\r\n _preload_content, _request_timeout, _host)\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py\", line 202, in __call_api\r\n raise e\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py\", line 199, in __call_api\r\n _request_timeout=_request_timeout)\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py\", line 407, in request\r\n headers=headers)\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/rest.py\", line 248, in GET\r\n query_params=query_params)\r\n File \"/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/rest.py\", line 238, in request\r\n raise ApiException(http_resp=r)\r\nkfp_server_api.exceptions.ApiException: (403)\r\nReason: Forbidden\r\nHTTP response headers: HTTPHeaderDict({'content-length': '19', 'content-type': 'text/plain', 'date': 'Fri, 15 Jan 2021 12:47:15 GMT', 'server': 'envoy', 'x-envoy-upstream-service-time': '0'})\r\nHTTP response body: RBAC: access denied\r\n\r\n```",
"Same issue here. The issue can be work-arounded by disabling RBAC in Istio (`clusterRbacConfig=OFF`). Which of course is not great for mutli-user setups.",
"Same issue.",
"You might find this issue useful: https://github.com/kubeflow/pipelines/issues/4440\r\nSpecifically these comments describe how to overcome this by adding a servicerolebinding and envoy filter: https://github.com/kubeflow/pipelines/issues/4440#issuecomment-687553259, https://github.com/kubeflow/pipelines/issues/4440#issuecomment-687689294, https://github.com/kubeflow/pipelines/issues/4440#issuecomment-687703390",
"Duplicate of #4440 ",
"For new clusters please see this comemnt from the issue 4440 \r\n\r\nhttps://github.com/kubeflow/pipelines/issues/4440#issuecomment-871927424\r\n\r\n```\r\ncat << EOF | kubectl apply -f -\r\napiVersion: security.istio.io/v1beta1\r\nkind: AuthorizationPolicy\r\nmetadata:\r\n name: bind-ml-pipeline-nb-kubeflow-user-example-com\r\n namespace: kubeflow\r\nspec:\r\n selector:\r\n matchLabels:\r\n app: ml-pipeline\r\n rules:\r\n - from:\r\n - source:\r\n principals: [\"cluster.local/ns/kubeflow-user-example-com/sa/default-editor\"]\r\n---\r\napiVersion: networking.istio.io/v1alpha3\r\nkind: EnvoyFilter\r\nmetadata:\r\n name: add-header\r\n namespace: kubeflow-user-example-com\r\nspec:\r\n configPatches:\r\n - applyTo: VIRTUAL_HOST\r\n match:\r\n context: SIDECAR_OUTBOUND\r\n routeConfiguration:\r\n vhost:\r\n name: ml-pipeline.kubeflow.svc.cluster.local:8888\r\n route:\r\n name: default\r\n patch:\r\n operation: MERGE\r\n value:\r\n request_headers_to_add:\r\n - append: true\r\n header:\r\n key: kubeflow-userid\r\n value: [email protected]\r\n workloadSelector:\r\n labels:\r\n notebook-name: test2\r\nEOF\r\n```\r\nIn my notebook \r\n```\r\n\r\nimport kfp\r\nclient = kfp.Client()\r\nprint(client.list_experiments())\r\n```\r\nOutput\r\n```\r\n{'experiments': [{'created_at': datetime.datetime(2021, 8, 12, 9, 14, 20, tzinfo=tzlocal()),\r\n 'description': None,\r\n 'id': 'b2e552e5-3324-483a-8ec8-b32894f49281',\r\n 'name': 'test',\r\n 'resource_references': [{'key': {'id': 'kubeflow-user-example-com',\r\n 'type': 'NAMESPACE'},\r\n 'name': None,\r\n 'relationship': 'OWNER'}],\r\n 'storage_state': 'STORAGESTATE_AVAILABLE'}],\r\n 'next_page_token': None,\r\n 'total_size': 1}\r\n```\r\n"
] | 2021-01-12T17:28:51 | 2021-08-12T13:26:24 | 2021-03-05T00:50:15 | NONE | null | /kind question
### What steps did you take:
Ran the below code:
![image](https://user-images.githubusercontent.com/33845447/104349736-f77b8280-550b-11eb-9e2c-907481d7485f.png)
### What happened:
I got the below:
```
HTTP response body: RBAC: access denied
ERROR:root:Failed to get healthz info attempt 4 of 5.
Traceback (most recent call last):
File "/home/jovyan/.local/lib/python3.6/site-packages/kfp/_client.py", line 312, in get_kfp_healthz
response = self._healthz_api.get_healthz()
File "/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api/healthz_service_api.py", line 77, in get_healthz
return self.get_healthz_with_http_info(**kwargs) # noqa: E501
File "/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api/healthz_service_api.py", line 162, in get_healthz_with_http_info
collection_formats=collection_formats)
File "/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py", line 383, in call_api
_preload_content, _request_timeout, _host)
File "/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py", line 202, in __call_api
raise e
File "/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py", line 199, in __call_api
_request_timeout=_request_timeout)
File "/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/api_client.py", line 407, in request
headers=headers)
File "/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/rest.py", line 248, in GET
query_params=query_params)
File "/home/jovyan/.local/lib/python3.6/site-packages/kfp_server_api/rest.py", line 238, in request
raise ApiException(http_resp=r)
kfp_server_api.exceptions.ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'content-length': '19', 'content-type': 'text/plain', 'date': 'Tue, 12 Jan 2021 17:25:02 GMT', 'server': 'envoy', 'x-envoy-upstream-service-time': '0'})
HTTP response body: RBAC: access denied
```
### What did you expect to happen:
Being able to create the pipeline
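For context, the usual pattern for reaching the KFP API through a Dex/Istio gateway from a notebook is to log in first and pass the session cookie to the client. This is only a sketch: the host, credentials, cookie name, and the availability of the `cookies` argument on `kfp.Client` are assumptions based on a default Dex setup, not something confirmed in this report:

```python
# Sketch, assuming a default Dex login flow in front of KFP and a kfp
# version whose Client accepts a `cookies` argument. All values below
# are placeholders for illustration.
import requests
import kfp

HOST = "http://localhost:8080"   # e.g. a port-forward to istio-ingressgateway
USERNAME = "user@example.com"    # assumed Dex user
PASSWORD = "12341234"            # assumed Dex password

session = requests.Session()
response = session.get(HOST)     # requests follows redirects to the Dex login form
session.post(
    response.url,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={"login": USERNAME, "password": PASSWORD},
)
cookie = session.cookies.get_dict()["authservice_session"]

client = kfp.Client(host=f"{HOST}/pipeline", cookies=f"authservice_session={cookie}")
print(client.list_pipelines())
```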
### Environment:
Kubeflow 1.2, Jupyter Notebook
How did you deploy Kubeflow Pipelines (KFP)?
KFP version:
build version v1beta1
KFP SDK version:
kfp 1.3.0
kfp-pipeline-spec 0.1.3.1
kfp-server-api 1.3.0
### Anything else you would like to add:
I'm using kubeflow in my username's namespace | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/4976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/4976/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/4984 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/4984/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/4984/comments | https://api.github.com/repos/kubeflow/pipelines/issues/4984/events | https://github.com/kubeflow/pipelines/issues/4984 | 784,823,544 | MDU6SXNzdWU3ODQ4MjM1NDQ= | 4,984 | Caching Documentation incorrect for 1.2 | {
"login": "fvde",
"id": 11456773,
"node_id": "MDQ6VXNlcjExNDU2Nzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/11456773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fvde",
"html_url": "https://github.com/fvde",
"followers_url": "https://api.github.com/users/fvde/followers",
"following_url": "https://api.github.com/users/fvde/following{/other_user}",
"gists_url": "https://api.github.com/users/fvde/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fvde/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fvde/subscriptions",
"organizations_url": "https://api.github.com/users/fvde/orgs",
"repos_url": "https://api.github.com/users/fvde/repos",
"events_url": "https://api.github.com/users/fvde/events{/privacy}",
"received_events_url": "https://api.github.com/users/fvde/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
},
{
"login": "rui5i",
"id": 31815555,
"node_id": "MDQ6VXNlcjMxODE1NTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/31815555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rui5i",
"html_url": "https://github.com/rui5i",
"followers_url": "https://api.github.com/users/rui5i/followers",
"following_url": "https://api.github.com/users/rui5i/following{/other_user}",
"gists_url": "https://api.github.com/users/rui5i/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rui5i/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rui5i/subscriptions",
"organizations_url": "https://api.github.com/users/rui5i/orgs",
"repos_url": "https://api.github.com/users/rui5i/repos",
"events_url": "https://api.github.com/users/rui5i/events{/privacy}",
"received_events_url": "https://api.github.com/users/rui5i/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I think this issue belongs over at https://github.com/kubeflow/website/. ",
"/area pipelines\r\n/area docs\r\n/priority p1\r\n/kind bug",
"Moving to pipelines for better tracking",
"Yes, it will be `cache-webhook-{namespace}`, so for kubeflow, it's -kubeflow\r\n\r\n@Ark-kun can you work on the documentation update?",
"/assign @rui5i ",
"Fixed by https://github.com/kubeflow/website/pull/2499"
] | 2021-01-12T10:19:15 | 2021-02-19T11:21:59 | 2021-02-19T11:21:59 | NONE | null | As far as I can tell the documentation for Caching (https://www.kubeflow.org/docs/pipelines/caching/) is incorrect for 1.2. The mutatingwebhookconfiguration is not called cache-webhook, but **cache-webhook-kubeflow**. I.e. all commands need to be updated from
`kubectl patch mutatingwebhookconfiguration cache-webhook -n ${NAMESPACE} --type='json' -p='[{"op":"replace", "path": "/webhooks/0/rules/0/operations/0", "value": "DELETE"}]'`
to
`kubectl patch mutatingwebhookconfiguration cache-webhook-kubeflow -n ${NAMESPACE} --type='json' -p='[{"op":"replace", "path": "/webhooks/0/rules/0/operations/0", "value": "DELETE"}]'` | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/4984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/4984/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/4975 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/4975/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/4975/comments | https://api.github.com/repos/kubeflow/pipelines/issues/4975/events | https://github.com/kubeflow/pipelines/issues/4975 | 783,805,330 | MDU6SXNzdWU3ODM4MDUzMzA= | 4,975 | Can't apply multiple filters on the same key for a given op | {
"login": "bencwallace",
"id": 12981236,
"node_id": "MDQ6VXNlcjEyOTgxMjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/12981236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bencwallace",
"html_url": "https://github.com/bencwallace",
"followers_url": "https://api.github.com/users/bencwallace/followers",
"following_url": "https://api.github.com/users/bencwallace/following{/other_user}",
"gists_url": "https://api.github.com/users/bencwallace/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bencwallace/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bencwallace/subscriptions",
"organizations_url": "https://api.github.com/users/bencwallace/orgs",
"repos_url": "https://api.github.com/users/bencwallace/repos",
"events_url": "https://api.github.com/users/bencwallace/events{/privacy}",
"received_events_url": "https://api.github.com/users/bencwallace/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2021-01-12T00:27:28 | 2021-01-20T23:57:04 | 2021-01-20T23:57:04 | CONTRIBUTOR | null | ### What steps did you take:
In a filter with more than one predicate using the same op **and** key, all but one of the predicates will be ignored. This is mostly relevant for the `NOT_EQUALS` and `IS_SUBSTRING` ops, although there might be unusual situations where someone wants multiple such predicates with other ops. At present, making a request with such a filter does not raise any errors or warnings despite this surprising behavior.
An example of such a filter, serialized to JSON, is the following:
```json
{ "predicates": [{
"op": "IS_SUBSTRING",
"key": "name",
"stringValue": "a"
}, {
"op": "IS_SUBSTRING",
"key": "name",
"stringValue": "b"
}]
}
```
### What happened:
Only results with name containing "b" were returned.
### What did you expect to happen:
All results containing both "a" and "b" should be returned.
### Environment:
How did you deploy Kubeflow Pipelines (KFP)?
I did not deploy it myself, but am using a full deployment in the cloud.
KFP version: Build [743746b](https://github.com/kubeflow/pipelines/tree/743746b96e6efc502c33e1e529f2fe89ce09481c)
KFP SDK version: 1.0.4 (don't think this is an SDK issue but I used the SDK to make the request)
### Anything else you would like to add:
This issue seems to stem from the [definition](https://github.com/kubeflow/pipelines/blob/c51c96a973f8b94d0f993b0b292d58008fe05607/backend/src/apiserver/filter/filter.go#L33-L46) of filter. I think I can fix this myself, if the KFP team thinks it's worth doing.
I think the best thing to do would be to start by changing the types of all fields of `Filter` from `map[string]interface{}` to `map[string][]interface{}`. Although most users would likely not apply multiple `EQUALS` (for example) filters, making this change would still result in more reasonable behavior (e.g. currently, if you have two filters that specify 'name' to be equal to different strings, you can get one result back, whereas you should get no results back).
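To illustrate the last-write-wins behavior described above, here is a small Python sketch (not the backend code itself) mimicking the current one-value-per-(op, key) map versus the proposed slice-of-values alternative:

```python
# Sketch: a one-value-per-key map silently drops repeated predicates,
# while a list of values per key keeps them all and ANDs them together.
predicates = [
    {"op": "IS_SUBSTRING", "key": "name", "stringValue": "a"},
    {"op": "IS_SUBSTRING", "key": "name", "stringValue": "b"},
]

# Current semantics (map[string]interface{}): last write wins.
current = {}
for p in predicates:
    current[(p["op"], p["key"])] = p["stringValue"]
print(current)  # {('IS_SUBSTRING', 'name'): 'b'} -- the "a" predicate is lost

# Proposed semantics (map[string][]interface{}): keep every value.
proposed = {}
for p in predicates:
    proposed.setdefault((p["op"], p["key"]), []).append(p["stringValue"])
where = " AND ".join(
    f"{key} LIKE '%{value}%'"
    for (_, key), values in proposed.items()
    for value in values
)
print(where)  # name LIKE '%a%' AND name LIKE '%b%'
```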
/kind bug
/area backend | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/4975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/4975/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/4973 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/4973/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/4973/comments | https://api.github.com/repos/kubeflow/pipelines/issues/4973/events | https://github.com/kubeflow/pipelines/issues/4973 | 783,269,675 | MDU6SXNzdWU3ODMyNjk2NzU= | 4,973 | "driver: bad connection" when using Managed storage via Google Cloud Marketplace deployment | {
"login": "Svendegroote91",
"id": 33965133,
"node_id": "MDQ6VXNlcjMzOTY1MTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/33965133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Svendegroote91",
"html_url": "https://github.com/Svendegroote91",
"followers_url": "https://api.github.com/users/Svendegroote91/followers",
"following_url": "https://api.github.com/users/Svendegroote91/following{/other_user}",
"gists_url": "https://api.github.com/users/Svendegroote91/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Svendegroote91/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Svendegroote91/subscriptions",
"organizations_url": "https://api.github.com/users/Svendegroote91/orgs",
"repos_url": "https://api.github.com/users/Svendegroote91/repos",
"events_url": "https://api.github.com/users/Svendegroote91/events{/privacy}",
"received_events_url": "https://api.github.com/users/Svendegroote91/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1682717392,
"node_id": "MDU6TGFiZWwxNjgyNzE3Mzky",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question",
"name": "kind/question",
"color": "2515fc",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"@Svendegroote91 Sorry for the late reply, reading through your description, it sounds like\r\n> database username & password left empty to default to root user without password.\r\n\r\nthis part is wrong, you need to type `root` into database username, it's not the default when you leave it to empty.",
"I am having exactly the same problem, but I fill the database username as `root`.\r\n\r\nThe biggest difference is that my cluster is deployed in us-central1.",
"Hi @andreclaudino, can you try to inspect logs from the cloudsql proxy pod? It should give us more info to identify the root cause",
"I had almost same problem (except for resources region)\r\n\r\n> can you try to inspect logs from the cloudsql proxy pod?\r\n\r\nIn my case, I found:\r\n\r\n> Cloud SQL Admin API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview?project=PROJECT_ID then retry.\r\n\r\nAnd activating the API fixed the problem. Hope this helps.",
"Closing because problem seems resolved. Feel free to comment if that's not the case"
] | 2021-01-11T10:58:53 | 2021-04-09T11:04:38 | 2021-04-09T11:04:38 | NONE | null | ### What steps did you take:
- Used the GCP default VPC network settings
- Created a multi-regional EU Google Cloud Storage bucket
- Created a MySQL (v5.7) instance in europe-west1-d with only the root user and no password (default settings with a public IP; I also tried enabling the private IP, but that did not resolve the issue)
- Configured AI Platform Pipelines (Google's managed KFP service) using Managed Storage
(params: storage bucket name: the bucket name without `gs://`; SQL connection name: copied from the MySQL UI; database prefix: 'db-test-sven'; database username & password: left empty to default to the root user without a password)
### What happened:
Deployment hangs on the 'ml-pipeline' component, which reports `driver: bad connection` as critical along with a continuous stream of errors: `[mysql] 2021/01/11 10:52:23 packets.go:36: unexpected EOF`
The 'metadata-grpc-deployment' also fails to start, with the following errors:
```
2021-01-11 10:52:07.160738: F ml_metadata/metadata_store/metadata_store_server_main.cc:220] Non-OK-status: status status: Internal: mysql_real_connect failed: errno: 2002, error: Can't connect to MySQL server on 'mysql' (115)MetadataStore cannot be created with the given connection config.
```
```
2021-01-11 10:52:08.781095: F ml_metadata/metadata_store/metadata_store_server_main.cc:220] Non-OK-status: status status: Internal: mysql_real_connect failed: errno: 2013, error: Lost connection to MySQL server at 'handshake: reading inital communication packet', system error: 11MetadataStore cannot be created with the given connection config.
```
It appears the MySQL connection cannot be established. Any help on how to resolve this?
### What did you expect to happen:
Deployment to successfully finish.
How did you deploy Kubeflow Pipelines (KFP)?
Google Cloud Marketplace
KFP version: 1.0.4
/kind bug
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/4973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/4973/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/4972 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/4972/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/4972/comments | https://api.github.com/repos/kubeflow/pipelines/issues/4972/events | https://github.com/kubeflow/pipelines/issues/4972 | 783,259,860 | MDU6SXNzdWU3ODMyNTk4NjA= | 4,972 | Experiment Run status became unknown | {
"login": "shawnho1018",
"id": 28553789,
"node_id": "MDQ6VXNlcjI4NTUzNzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/28553789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shawnho1018",
"html_url": "https://github.com/shawnho1018",
"followers_url": "https://api.github.com/users/shawnho1018/followers",
"following_url": "https://api.github.com/users/shawnho1018/following{/other_user}",
"gists_url": "https://api.github.com/users/shawnho1018/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shawnho1018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shawnho1018/subscriptions",
"organizations_url": "https://api.github.com/users/shawnho1018/orgs",
"repos_url": "https://api.github.com/users/shawnho1018/repos",
"events_url": "https://api.github.com/users/shawnho1018/events{/privacy}",
"received_events_url": "https://api.github.com/users/shawnho1018/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930619511,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTE=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/priority/p0",
"name": "priority/p0",
"color": "db1203",
"default": false,
"description": ""
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"We can see the same issue with Kubeflow 1.2 with Dex on Kubernetes. Everything works fine after the first authentication attempt:\r\n\r\n![image](https://user-images.githubusercontent.com/15926980/105490034-757e1d00-5cb4-11eb-8bc9-f1829e0ddf78.png)\r\n\r\nI could even log out and log in again and it would still work.\r\nThen, few hours later, after re-authentication, the same pipeline is not correctly shown in the UI:\r\n\r\n![image](https://user-images.githubusercontent.com/15926980/105490095-934b8200-5cb4-11eb-99bd-37a0dc67862a.png)\r\n\r\n",
"I have a similar issue with kubeflow 1.2 Dex. When I create a run, the status becomes unknown, and It stucks.",
"@DavidSpek we have the same issue: KFv1.2, K8s 1.17 installation kfctl v1.2 and Dex Multiauth.\r\nThe Experiments UI is broken and says\"status unknown\", even if we clean pods from hand it still doesnt get the update infos in UI.\r\n\r\nCould you please help or anyone an idea?\r\nThx!",
"Hi all, can you try https://github.com/kubeflow/pipelines/issues/3763#issuecomment-628379351 to verify if this is a duplicate of that issue?\r\n\r\n> Workaround:\r\n> run kubectl delete pod ml-pipeline-persistenceagent-xxxxxxx-xxxx -n kubeflow to restart persistence agent.",
"/cc @yanniszark \r\nBecause this only happens in istio-dex installation, can you take a look?",
"Still happens for me too, using dex. https://github.com/kubeflow/pipelines/issues/3763#issuecomment-628379351 worked for me. So this indicates some kind of race issue.",
"@Bobgy I think “istio-dex” might just be synonymous for multi-user installations, it might not be directly related to Dex. ",
"Duplicate of https://github.com/kubeflow/pipelines/issues/3763",
"Let's concentrate discussion on the canonical issue then"
] | 2021-01-11T10:44:18 | 2021-03-19T05:39:06 | 2021-03-19T05:39:06 | NONE | null | ### What steps did you take:
1. I installed Kubeflow 1.2 with Dex on a Kubernetes cluster, following [this tutorial](https://www.kubeflow.org/docs/started/k8s/kfctl-istio-dex/).
2. I tested pipeline deployment using [this tutorial](https://github.com/kubeflow/pipelines/blob/b604c6171244cc1cd80bfdc46248eaebf5f985d6/samples/tutorials/Data%20passing%20in%20python%20components/Data%20passing%20in%20python%20components%20-%20Files.py).
### What happened:
The entire deployment was successful on the day Kubeflow was set up. All pipeline pods and the Kubeflow UI behaved correctly.
However, on the following day (I suspect my SSO session expired), even though I swapped in a valid token, the experiment run UI always shows the status as unknown. All pods still execute successfully when the pipeline is deployed.
### What did you expect to happen:
The experiment run UI should display correctly
### Environment:
kubeflow 1.2
GKE: 1.18.12
How did you deploy Kubeflow Pipelines (KFP)?
I deployed it along with the Kubeflow installation.
KFP version:
Version: 1.0.4
KFP SDK version:
kfp 1.3.0
kfp-pipeline-spec 0.1.3.1
kfp-server-api 1.3.0
### Anything else you would like to add:
/kind bug
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/4972/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/4972/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/4970 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/4970/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/4970/comments | https://api.github.com/repos/kubeflow/pipelines/issues/4970/events | https://github.com/kubeflow/pipelines/issues/4970 | 782,702,018 | MDU6SXNzdWU3ODI3MDIwMTg= | 4,970 | Filter only supports one op per predicate | {
"login": "bencwallace",
"id": 12981236,
"node_id": "MDQ6VXNlcjEyOTgxMjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/12981236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bencwallace",
"html_url": "https://github.com/bencwallace",
"followers_url": "https://api.github.com/users/bencwallace/followers",
"following_url": "https://api.github.com/users/bencwallace/following{/other_user}",
"gists_url": "https://api.github.com/users/bencwallace/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bencwallace/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bencwallace/subscriptions",
"organizations_url": "https://api.github.com/users/bencwallace/orgs",
"repos_url": "https://api.github.com/users/bencwallace/repos",
"events_url": "https://api.github.com/users/bencwallace/events{/privacy}",
"received_events_url": "https://api.github.com/users/bencwallace/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I've realized this is, in fact, a bug (rather than a feature request) and opened a more detailed issue [here](https://github.com/kubeflow/pipelines/issues/4975)."
] | 2021-01-09T23:05:30 | 2021-01-12T00:27:35 | 2021-01-12T00:26:14 | CONTRIBUTOR | null | In a filter with more than one predicate using the same op, all but one of these predicates will be ignored. Making a request with such a filter does not raise any errors or warnings despite this surprising behavior. An example of such a filter, serialized to JSON, is the following:
```json
{ "predicates": [{
"op": "IS_SUBSTRING",
"key": "name",
"stringValue": "a"
}, {
"op": "IS_SUBSTRING",
"key": "name",
"stringValue": "b"
}]
}
```
In this case, only results with `name` containing "b" will be returned. It would be nice to get all results containing both "a" and "b".
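For reproduction, a sketch of submitting such a filter through the SDK. The host is a placeholder, and because not every 1.x release of `kfp.Client.list_runs` exposes a `filter` argument, this goes through `_run_api`, an internal attribute of `kfp.Client` wrapping the generated `kfp_server_api` run service, whose `list_runs` does accept one:

```python
# Sketch: reproducing the issue via the SDK (host and page size are
# placeholders; _run_api is an internal attribute of kfp.Client).
import json
import kfp

client = kfp.Client(host="http://localhost:8080")  # placeholder endpoint

filter_json = json.dumps({
    "predicates": [
        {"op": "IS_SUBSTRING", "key": "name", "stringValue": "a"},
        {"op": "IS_SUBSTRING", "key": "name", "stringValue": "b"},
    ]
})

response = client._run_api.list_runs(page_size=50, filter=filter_json)
# Expected: runs whose name contains both "a" and "b".
# Observed: only the last predicate ("b") is applied.
print([run.name for run in response.runs or []])
```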
This issue seems to stem from the backend [definition](https://github.com/kubeflow/pipelines/blob/master/backend/src/apiserver/filter/filter.go#L33-L46) of filter. Until this is resolved, it would be good to at least attach a note about it in the documentation.
Update: I've started taking a closer look and might be able to extend this functionality if the KubeFlow team thinks it's a good idea! | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/4970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/4970/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/4969 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/4969/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/4969/comments | https://api.github.com/repos/kubeflow/pipelines/issues/4969/events | https://github.com/kubeflow/pipelines/issues/4969 | 782,200,913 | MDU6SXNzdWU3ODIyMDA5MTM= | 4,969 | Tensorflow type 21 not convertible to numpy dtype. | {
"login": "shuklashashank",
"id": 7547493,
"node_id": "MDQ6VXNlcjc1NDc0OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7547493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuklashashank",
"html_url": "https://github.com/shuklashashank",
"followers_url": "https://api.github.com/users/shuklashashank/followers",
"following_url": "https://api.github.com/users/shuklashashank/following{/other_user}",
"gists_url": "https://api.github.com/users/shuklashashank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuklashashank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuklashashank/subscriptions",
"organizations_url": "https://api.github.com/users/shuklashashank/orgs",
"repos_url": "https://api.github.com/users/shuklashashank/repos",
"events_url": "https://api.github.com/users/shuklashashank/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuklashashank/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1499519734,
"node_id": "MDU6TGFiZWwxNDk5NTE5NzM0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/upstream_issue",
"name": "upstream_issue",
"color": "006b75",
"default": false,
"description": ""
},
{
"id": 2710158147,
"node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info",
"name": "needs more info",
"color": "DBEF12",
"default": false,
"description": ""
}
] | closed | false | {
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"The `pickled_object` is not a string, it's a binary blob. It might get corrupted when being written to a text file. This is probably the problem you're having.\r\n\r\nThe object is also likely pretty big.\r\n\r\nFor non-text or non-small data you should use file-based data passing.\r\nPlease check https://github.com/kubeflow/pipelines/blob/23c12a9fb48417bd9c90419195b4f9c33e913aa7/samples/tutorials/Data%20passing%20in%20python%20components/Data%20passing%20in%20python%20components%20-%20Files.py#L126\r\n\r\n```python\r\n@create_component_from_func\r\ndef data_investigation(bunch_of_stuff_path: OutputPath()):\r\n ....\r\n d = {\"train_ds\":train_ds,\"val_ds\":val_ds,\"class_names\":class_names}\r\n with open(bunch_of_stuff_path, 'wb') as bunch_of_stuff_file:\r\n pickle.dump(d, bunch_of_stuff_file)\r\n```\r\n\r\n>d = {\"train_ds\":train_ds,\"val_ds\":val_ds,\"class_names\":class_names}\r\n\r\nIt's usually better to output everything separately\r\n\r\n> pickle.dumps\r\n\r\nIt's best to avoid pickle. It's python-specific and and there might be some incompatibilities.",
"Hi Ark-kun\r\n\r\nI tried with `pickle.dumps` approach still getting above mentioned issues. Is there any better approach to accomplish this above task.\r\nAnother approach I tried with output different approach.\r\n`\r\nError: a bytes-like object is required, not 'BatchDataset`",
"Can you please provide a minimal PoC?",
"This error `tensorflow.python.framework.errors_impl.InternalError: Tensorflow type 21 not convertible to numpy dtype.` happens even using the `dump` method",
"Closing this issue since this is a Tensorflow issue unrelated to Kubeflow Pipelines.\r\nPlease file a bug for the Tensorflow team.\r\nThank you."
] | 2021-01-08T15:33:11 | 2021-03-18T22:20:06 | 2021-03-18T22:20:05 | NONE | null | Hi,
We are trying to use the [Classification](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/classification.ipynb) method with Kubeflow Pipelines, and we've divided the code into multiple segments. In the data-investigation section, we have to return the object and use it in other sections, so we used the `pickle.dumps` method to convert the object to a string, and we are facing the below error.
> "tensorflow.python.framework.errors_impl.InternalError: Tensorflow type 21 not convertible to numpy dtype."
<img width="1219" alt="Screenshot 2021-01-08 at 8 49 09 PM" src="https://user-images.githubusercontent.com/7547493/104032405-220bc980-51f4-11eb-89f0-8ef74026061d.png">
Code-Snippet
```
@func_to_container_op
def data_investigation() -> str:
    ....
    ....
    ....
    d = {"train_ds":train_ds,"val_ds":val_ds,"class_names":class_names}
    pickled_object = pickle.dumps(d)
    return pickled_object
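    # Note (sketch, not part of the original report): tf.data datasets are
    # variant tensors under the hood (TF dtype 21 == DT_VARIANT), which is why
    # pickling them fails with "not convertible to numpy dtype". A file-based
    # alternative, assuming TF >= 2.3, would be to skip pickle entirely:
    # declare OutputPath() parameters on the component, write each dataset
    # with tf.data.experimental.save(train_ds, train_ds_path), and reload it
    # downstream with tf.data.experimental.load(path, element_spec), passing
    # only paths (plus class_names as a plain JSON string) between components.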
....
....
....
@dsl.pipeline(name='Classification_pipeline',description='Classification')
def classification_pipeline():
data = data_investigation()
model_data = model_fitting(data.output)
train_data = training(model_data.output)
data_aug = data_augmentations_func(train_data.output)
``` | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/4969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/4969/timeline | null | completed | null | null | false |