Compare commits

...

72 Commits

Author SHA1 Message Date
wxiaoguang
c0b55670dc Make Gitea 1.22 compilable with Go 1.24 (#33643)
In case some users are still using Gitea 1.22
2025-02-19 00:11:43 -08:00
techknowlogick
2d7a0e3a8b bump x/net (#32896) (#32900)
backport #32896
2024-12-19 09:27:41 +08:00
Lunny Xiao
8eefa1f6de Add two missing sync feeds for refs/pull (#32815) (#32822)
Follow #32659
Backport #32815
2024-12-13 08:25:22 +00:00
Lunny Xiao
d172c6d2b0 Add changelog for 1.22.6 (#32825) 2024-12-13 15:55:25 +08:00
Lunny Xiao
c630b88f35 Fix misuse of PublicKeyCallback (#32810) (#32824)
Backport #32810
2024-12-13 07:34:52 +00:00
Giteabot
e7de2fc136 Fix lfs migration (#32812) (#32818)
Backport #32812 by @hiifong

Fix: #32803

![image](https://github.com/user-attachments/assets/3ea1f4e0-e26f-4a15-957e-dd6caf91deb1)

![image](https://github.com/user-attachments/assets/44b99624-c347-4f2d-a11c-13ec1276eea2)

Co-authored-by: hiifong <i@hiif.ong>
2024-12-13 11:41:12 +08:00
Giteabot
4fe19fc722 Avoid MacOS keychain dialog in integration tests (#32813) (#32816)
Backport #32813 by @bohde

Mac's git installation ships with a system-wide config that configures
the credential helper `osxkeychain`, which will prompt the user with a
dialog.

```
$ git config list --system 
credential.helper=osxkeychain
```
Setting the environment variable
[`GIT_CONFIG_NOSYSTEM=true`](https://git-scm.com/docs/git-config#ENVIRONMENT)
stops Git from loading the system-wide config, preventing the dialog from
appearing.

Closes #26717
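
A minimal sketch of how a test can apply this, using Go's standard `testing` package (the test itself is illustrative, not Gitea's actual integration suite):

```go
package integration

import (
	"os/exec"
	"testing"
)

// TestGitIgnoresSystemConfig shows the idea: with GIT_CONFIG_NOSYSTEM set,
// every git subprocess spawned by the test ignores the system-wide config,
// so the osxkeychain credential helper configured there can never prompt.
func TestGitIgnoresSystemConfig(t *testing.T) {
	// t.Setenv restores the previous value when the test finishes.
	t.Setenv("GIT_CONFIG_NOSYSTEM", "true")

	// Any git command run from here on inherits the variable.
	if out, err := exec.Command("git", "version").CombinedOutput(); err != nil {
		t.Fatalf("git version failed: %v: %s", err, out)
	}
}
```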

Co-authored-by: Rowan Bohde <rowan.bohde@gmail.com>
2024-12-13 05:41:02 +08:00
techknowlogick
b54b6d103f use specific namespace labels 2024-12-12 15:45:55 -05:00
Giteabot
84ce417312 use dedicated runners for release artifacts (#32811) (#32814)
Backport #32811 by @techknowlogick

GH runners are having trouble, so switch the remaining release jobs to
use dedicated runners.

Co-authored-by: techknowlogick <techknowlogick@gitea.com>
2024-12-12 15:44:58 -05:00
Lunny Xiao
c0092af2e0 Add changelog for 1.22.5 (#32794) 2024-12-12 04:56:35 +08:00
Lunny Xiao
6092bbac4d 🐛 Fix a keystring misuse and refactor duplicate keystrings (#32668) (#32792)
Backport #32668 

- Fixes a translation keystring misuse where the string 'open
milestones' is used in place of 'closed milestones'.
- De-duplicates the use of the 'open milestones' and 'closed milestones'
keystrings on the sidebar of an issue, reusing the ones on the issues
filter and action bars.
- Closes #32667

Co-authored-by: Simon Pistache <105607989+SimonPistache@users.noreply.github.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2024-12-11 13:32:10 -05:00
Lunny Xiao
e4ca557fd0 Upgrade crypto library (#32791)
backport #32750
2024-12-11 13:18:58 -05:00
Giteabot
ef9d1e9002 Add standard-compliant route to serve outdated R packages (#32783) (#32789)
Backport #32783 by Sebastian-T-T

The R package repository currently does not support older versions of
packages, which should be served from a separate /Archive path. This PR
remedies that by adding a new route for it.

Fixes #32782
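
For illustration only, a sketch of what registering such an archive route could look like with go-chi (the router listed in Gitea's go.mod); the exact path pattern and handler names here are assumptions, not the code from this PR. CRAN-style servers keep old source tarballs under `src/contrib/Archive/<package>/`.

```go
package rpkg

import (
	"fmt"
	"net/http"

	"github.com/go-chi/chi/v5"
)

// registerRoutes wires up the current-version path plus the new Archive path
// for outdated versions. Handler names are hypothetical.
func registerRoutes(r chi.Router) {
	r.Get("/src/contrib/{filename}", servePackage)
	// New: serve outdated package versions from a dedicated Archive path.
	r.Get("/src/contrib/Archive/{packagename}/{filename}", serveArchivedPackage)
}

func servePackage(w http.ResponseWriter, req *http.Request) {
	fmt.Fprintf(w, "current: %s\n", chi.URLParam(req, "filename"))
}

func serveArchivedPackage(w http.ResponseWriter, req *http.Request) {
	fmt.Fprintf(w, "archived: %s/%s\n",
		chi.URLParam(req, "packagename"), chi.URLParam(req, "filename"))
}
```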

Co-authored-by: Sebastian T. T. <109338575+Sebastian-T-T@users.noreply.github.com>
2024-12-11 16:49:06 +00:00
Giteabot
0c7e44fcf7 Fix internal server error when updating labels without write permission (#32776) (#32785) 2024-12-10 17:22:03 -08:00
Giteabot
3a9039bc95 Make wiki pages visit fast (#32732) (#32745)
Backport #32732 by @lunny

Fix #20156

We reuse the code from the repository code view instead of the current
code. Previously it took `5653ms` for
https://gitea.com/henri/wiki/wiki/?action=_pages on my local machine;
now it's about `300ms`.

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-12-07 02:54:29 +08:00
silverwind
063655c391 Bump relative-time-element to v4.4.4 (#32739)
Backport https://github.com/go-gitea/gitea/pull/32730 to v1.22
2024-12-06 15:39:45 +01:00
Giteabot
eee16e433c Fix fork page branch selection (#32711) (#32725)
Backport #32711 by @lunny

Fix #32709

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-12-05 18:25:14 +00:00
KN4CK3R
0c4c28bc29 Add Swift login endpoint (#32693) (#32701)
Backport of #32693

Fix #32683

This PR adds the login endpoint and fixes the documentation links.
2024-12-06 01:53:55 +08:00
Giteabot
d8ad9228ca Fix gogit GetRefCommitID (#32705) (#32712)
Backport #32705 by @Zettat123

Fix #32335

When `GetRefCommitID` is called and the reference is already a commit ID,
the go-git implementation returns a `NotExist` error. This PR improves
`GetRefCommitID` for go-git: if the input is already a commit ID, it is
returned directly.
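
A rough sketch of that short-circuit, with illustrative names rather than the actual gogit module code:

```go
package gitsketch

import "regexp"

// sha1Pattern matches a full 40-character hexadecimal object ID.
var sha1Pattern = regexp.MustCompile(`^[0-9a-f]{40}$`)

// getRefCommitID resolves a reference name to a commit ID. The fix described
// above: if the input already is a commit ID, return it directly instead of
// letting the go-git resolver report a NotExist error.
func getRefCommitID(name string, resolveRef func(string) (string, error)) (string, error) {
	if sha1Pattern.MatchString(name) {
		return name, nil // already a commit ID, nothing to resolve
	}
	return resolveRef(name) // e.g. refs/heads/main -> commit ID via go-git
}
```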

Co-authored-by: Zettat123 <zettat123@gmail.com>
2024-12-04 07:59:48 +00:00
Giteabot
0d1fc2b2e9 Fix delete branch perm checking (#32654) (#32707)
Backport #32654 by @lunny

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-12-04 13:25:35 +08:00
Giteabot
a332805f6e Fix word overflow in file search page (#32695) (#32699)
Backport #32695 by yp05327

Co-authored-by: yp05327 <576951401@qq.com>
2024-12-04 08:19:43 +08:00
Giteabot
4b73e92264 Fix race condition in mermaid observer (#32599) (#32673)
Backport #32599 by william-allspice
2024-11-29 19:44:41 +08:00
Giteabot
27489f2e0b Don't create action when syncing mirror pull refs (#32659) (#32664)
Backport #32659 by @lunny

Fix #27961

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-11-29 02:13:16 +08:00
Lunny Xiao
293355777f Add release note for v1.22.4 (#32513)
Add release note for v1.22.4

---------

Co-authored-by: Kyle D. <kdumontnu@gmail.com>
2024-11-26 03:01:54 +08:00
Lunny Xiao
cf1a38b03d Fix get reviewers' bug (#32415) (#32616)
This PR rewrites the `GetReviewer` function and moves it to the service layer.

Reviewers should not be watchers, so this PR removes all watchers from
the reviewer candidates. When the repository is under an organization, the
pull request unit read permission is now checked, which resolves the bug below.

Fix #32394
Backport #32415
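
As a simplified sketch of the rule (the function shape and names are illustrative, not the service-layer code itself):

```go
package reviewsketch

import "context"

// User stands in for Gitea's user model in this sketch.
type User struct{ ID int64 }

// findReviewers illustrates the behaviour described above: watchers are no
// longer reviewer candidates, and for repositories owned by an organization
// only members of teams with read access to the pull-requests unit are added.
func findReviewers(ctx context.Context, ownedByOrg bool,
	usersWithRepoReadAccess func(context.Context) ([]*User, error),
	usersInTeamsWithPullsRead func(context.Context) ([]*User, error),
) ([]*User, error) {
	users, err := usersWithRepoReadAccess(ctx) // access table only, no watchers
	if err != nil {
		return nil, err
	}
	if ownedByOrg {
		teamUsers, err := usersInTeamsWithPullsRead(ctx)
		if err != nil {
			return nil, err
		}
		users = append(users, teamUsers...)
	}
	return users, nil
}
```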
2024-11-23 12:42:58 +08:00
Lunny Xiao
073ba977fc Fix clean tmp dir (#32360) (#32593)
Backport #32360 

Try to fix #31792 

Credit to @jeroenlaylo
Copied from
https://github.com/go-gitea/gitea/issues/31792#issuecomment-2311920520

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2024-11-22 08:50:35 +00:00
Lunny Xiao
2b8b2772fd Fix PR creation on forked repositories (#31863) (#32591)
Resolves #20475
Backport #31863

Co-authored-by: Job <LordChunk@users.noreply.github.com>
2024-11-22 08:12:40 +00:00
Lunny Xiao
87ceecfb3a Fix the missing menu in organization project view page (#32313) (#32592)
Backport #32313 

#29248 didn't modify the view page.
The class name is not good enough, so this is a quick fix.

Before:
org:

![image](https://github.com/user-attachments/assets/3e26502d-66b4-4043-ab03-003ba7391487)
user:

![image](https://github.com/user-attachments/assets/9b22b90c-d63c-4228-acad-4d9fb20590ac)

After:
org:

![image](https://github.com/user-attachments/assets/21bf98a7-8a5b-4dc6-950a-88f529e36450)
user: (no change)

![image](https://github.com/user-attachments/assets/fea0dcae-3625-44e8-bb9e-4c3733da8764)

Co-authored-by: yp05327 <576951401@qq.com>
2024-11-22 01:50:34 +00:00
Lunny Xiao
c2598b4642 Support HTTP POST requests to /userinfo, aligning to OpenID Core specification (#32578) (#32594) 2024-11-21 07:22:18 -08:00
wxiaoguang
a290aab0e8 Fix debian package clean up (#32351) (#32590)
Partially backport #32351
2024-11-21 06:27:02 +00:00
Giteabot
8f6cc95734 Fix GetInactiveUsers (#32540) (#32588)
Backport #32540 by @lunny

Fix #31480

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-11-21 13:25:36 +08:00
Rowan Bohde
0b5da27570 allow the actions user to login via the jwt token (#32527) (#32580)
Backport #32527

We have some actions that leverage the Gitea API that began receiving
401 errors, with a message that the user was not found. These actions
use the `ACTIONS_RUNTIME_TOKEN` env var in the actions job to
authenticate with the Gitea API. The format of this env var in actions
jobs changed with go-gitea/gitea/pull/28885 to be a JWT (with a
corresponding update to `act_runner`). Since it was a JWT, the OAuth
parsing logic attempted to parse it as an OAuth token, and would return
user not found, instead of falling back to look up the running task and
assigning it to the actions user.

This change makes `ACTIONS_RUNTIME_TOKEN` in action runners usable again by
attempting to parse OAuth JWTs and falling back when that fails. The code to
parse the potential old-format `ACTIONS_RUNTIME_TOKEN` was kept in case
someone is running an older version of act_runner that doesn't support the
Actions JWT.
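
A simplified sketch of the intended order (the helper names are hypothetical, not Gitea's actual OAuth parsing code):

```go
package authsketch

import "errors"

var errNotOAuth = errors.New("token is not an OAuth2 access token")

// resolveActionsToken tries the bearer token as an OAuth2 JWT first; if that
// does not identify a user, it falls back to looking up the running Actions
// task so the request can be attributed to the actions user instead of
// failing with "user not found".
func resolveActionsToken(token string,
	parseOAuthJWT func(string) (userID int64, err error),
	lookupRunningTask func(string) (taskID int64, err error),
) (userID, taskID int64, err error) {
	if uid, err := parseOAuthJWT(token); err == nil {
		return uid, 0, nil
	}
	// Previously parsing stopped here with an error; now we fall back.
	tid, err := lookupRunningTask(token)
	if err != nil {
		return 0, 0, errNotOAuth
	}
	return 0, tid, nil
}
```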
2024-11-21 03:18:00 +00:00
wxiaoguang
81ec66c257 Fix submodule parsing (#32571) (#32577)
A quick fix for #32568
Partially backport from #32571
2024-11-21 10:32:19 +08:00
Giteabot
3661b14d97 Remove unnecessary code (#32560) (#32567)
Backport #32560 by @lunny

PushMirrors are only used on the repository settings page, so they should
not be loaded on every repository page.

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-11-20 02:55:59 +08:00
Lunny Xiao
cf2d332443 Refactor find forks and fix possible bugs that weaken permissions check (#32528) (#32547)
Backport #32528

- Move models/GetForks to services/FindForks
- Add doer as a parameter of FindForks to check permissions
- Slight performance optimization for get forks API with batch loading
of repository units
- Add tests for forking repository to organizations

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2024-11-19 04:08:32 +00:00
Giteabot
1b7031c5c2 Fix some places which don't respect org full name setting (#32243) (#32550)
Backport #32243 by @lunny

Partially fix #31345

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-11-19 02:49:29 +00:00
Lunny Xiao
673fee427e Refactor push mirror find and add check for updating push mirror (#32539) (#32549)
backport #32539

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2024-11-18 23:55:27 +08:00
wxiaoguang
578c02d652 Improve some sanitizer rules (#32534)
This is a backport-only fix for 1.22

1.23 has a proper fix #32533
2024-11-18 03:42:30 +00:00
Giteabot
6555cfcac3 Fix basic auth with webauthn (#32531) (#32536)
Backport #32531 by @lunny

WebAuthn should behave the same way as TOTP: when it is enabled, basic auth
with username/password should additionally require WebAuthn authentication,
otherwise a 401 is returned.
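
A minimal sketch of that rule (the function is illustrative, not the actual auth middleware):

```go
package basicauthsketch

import "net/http"

// allowBasicAuth mirrors the behaviour described above: once a user has
// WebAuthn credentials enrolled (just like TOTP), username/password alone is
// rejected with 401 instead of being treated as a full login.
func allowBasicAuth(w http.ResponseWriter, passwordOK, userHasWebAuthn bool) bool {
	if !passwordOK || userHasWebAuthn {
		// Either the password is wrong, or a second factor is required.
		w.WriteHeader(http.StatusUnauthorized)
		return false
	}
	return true
}
```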

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-11-16 18:21:00 +00:00
Giteabot
b6eef34874 Fix artifact v4 upload above 8MB (#31664) (#32523) 2024-11-16 09:15:33 -08:00
Giteabot
d03dd04d65 Remove transaction for archive download (#32186) (#32520)
Backport #32186 by @lunny

Since there is a status column in the database, the transaction is
unnecessary when downloading an archive. The transaction is blocking
database operations, especially with SQLite.

Replace #27563

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-11-15 10:27:38 +01:00
Giteabot
257ce61023 Fix oauth2 error handle not return immediately (#32514) (#32516)
Backport #32514 by lunny

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-11-15 11:27:04 +08:00
Lunny Xiao
781310df77 Trim title before insert/update to database to match the size requirements of database (#32498) (#32507) 2024-11-14 18:06:31 -08:00
Giteabot
f79f8e13e3 Fix nil panic if repo doesn't exist (#32501) (#32502)
Backport #32501 by wxiaoguang

fix  #32496

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2024-11-14 04:47:04 +00:00
Giteabot
a4263d341c Add a doctor check to disable the "Actions" unit for mirrors (#32424) (#32497)
Backport #32424 by @Zettat123

Resolve #32232

Users can disable the "Actions" unit for all mirror repos by running 
```
gitea doctor check --run  disable-mirror-actions-unit --fix
```

Co-authored-by: Zettat123 <zettat123@gmail.com>
2024-11-13 18:47:56 +00:00
6543
52a66d78d4 Update nix development environment for v1.22.x (#32495)
just bump:

 * golang:  v1.22.2 ->  v1.22.9
 * nodejs: v20.12.2 -> v20.18.0
 * python: v3.12.3 -> v3.12.7
2024-11-13 12:40:52 -05:00
wxiaoguang
ef339713c2 Refactor internal routers (partial backport, auth token const time comparing) (#32473) (#32479)
Partially backport #32473. LFS related changes are not in 1.22, so skip
them.

1. Ignore non-existing repos during migrations
2. Improve ReadBatchLine's comment
3. Use `X-Gitea-Internal-Auth` header for internal API calls and make
the comparison constant time (it wasn't a serious problem because in the
real world it's nearly impossible to timing-attack the token, but it is
security related and good to fix and backport; see the sketch after this list)
4. Fix route mock nil check
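
For point 3, Go's standard library offers `crypto/subtle` for the constant-time comparison; a sketch (only the header name comes from this PR, the rest is illustrative):

```go
package internalauth

import (
	"crypto/sha256"
	"crypto/subtle"
	"net/http"
)

// validInternalAuth compares the X-Gitea-Internal-Auth header against the
// expected secret in constant time. Hashing both values first keeps the
// comparison length-independent as well.
func validInternalAuth(req *http.Request, expectedSecret string) bool {
	got := sha256.Sum256([]byte(req.Header.Get("X-Gitea-Internal-Auth")))
	want := sha256.Sum256([]byte(expectedSecret))
	return subtle.ConstantTimeCompare(got[:], want[:]) == 1
}
```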
2024-11-13 10:26:37 +08:00
wxiaoguang
26437a03b0 Disable Oauth check if oauth disabled (#32368) (#32480)
Partially backport "Disable Oauth check if oauth disabled" (#32368)
2024-11-12 06:09:47 +00:00
Giteabot
b48df1082e cargo registry - respect renamed dependencies (#32430) (#32478)
Backport #32430 by usbalbin

Co-authored-by: Albin Hedman <albin9604@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2024-11-12 03:26:26 +00:00
Giteabot
eb5733636b Fix broken releases when re-pushing tags (#32435) (#32449)
Backport #32435 by @Zettat123

Fix #32427

---------

Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-11-10 23:49:59 +00:00
Giteabot
62d8433194 Fix mermaid diagram height when initially hidden (#32457) (#32464)
Backport #32457 by @silverwind

In a hidden iframe, `document.body.clientHeight` is not reliable. Use
`IntersectionObserver` to detect the visibility change and update the
height there.

Fixes: https://github.com/go-gitea/gitea/issues/32392

<img width="885" alt="image"
src="https://github.com/user-attachments/assets/a95ef6aa-27e7-443f-9d06-400ef27919ae">

Co-authored-by: silverwind <me@silverwind.io>
2024-11-11 04:05:42 +08:00
Giteabot
22a93c1cdc Only provide the commit summary for Discord webhook push events (#32432) (#32447)
Backport #32432 by @kemzeb

Resolves #32371.

#31970 should have just shown the commit summary, but
`strings.SplitN()` was misused such that we did not perform any
splitting at all and just used the whole message. This was not caught in the
unit test made in that PR since the test commit summary was longer than 50
characters (which truncated away the commit description).

This change resolves that and adds another unit test to ensure that we
only show the commit summary.
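
The difference comes down to how `strings.SplitN` counts its pieces; a small standalone illustration (the exact webhook call site may differ):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	msg := "Short summary line\n\nLong description that should not reach Discord."

	// Misuse: with n=1 no splitting happens at all, so the whole message
	// (summary plus description) comes back as a single element.
	wrong := strings.SplitN(msg, "\n", 1)[0]

	// Fix: with n=2 everything after the first newline is split off,
	// leaving only the commit summary.
	right := strings.SplitN(msg, "\n", 2)[0]

	fmt.Printf("wrong: %q\nright: %q\n", wrong, right)
}
```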

Co-authored-by: Kemal Zebari <60799661+kemzeb@users.noreply.github.com>
2024-11-08 09:13:49 +08:00
Lunny Xiao
16e51e91a1 Only query team tables if repository is under org when getting assignees (#32414) (#32426)
backport #32414 

It's unnecessary to query the team table if the repository is not under
an organization when getting assignees.
2024-11-06 11:22:11 +08:00
wxiaoguang
936847b3da Quick fix milestone deadline 9999 for 1.22 (#32423) 2024-11-05 14:13:19 +08:00
Lunny Xiao
7430d069b3 Fix created_unix for mirroring (#32342) (#32406)
Fix #32233
Backport #32342
2024-11-05 11:43:30 +08:00
Lunny Xiao
a3b7b98336 Fix broken image when editing comment with non-image attachments (#32319) (#32345)
Backport #32319 

Fix #32316

---------

Co-authored-by: yp05327 <576951401@qq.com>
2024-11-02 13:34:09 +08:00
Zettat123
898f852d03 Fix missing signature key error when pulling Docker images with SERVE_DIRECT enabled (#32365) (#32397)
Backport #32365

Fix #28121

I did some tests and found that the `missing signature key` error is
caused by an incorrect `Content-Type` header. Gitea correctly sets the
`Content-Type` header when serving files.


348d1d0f32/routers/api/packages/container/container.go (L712-L717)
However, when `SERVE_DIRECT` is enabled, the `Content-Type` header may
be set to an incorrect value by the storage service. To fix this issue,
we can use query parameters to override response header values.

https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html

<img width="600px" src="https://github.com/user-attachments/assets/f2ff90f0-f1df-46f9-9680-b8120222c555" />

In this PR, I introduced a new parameter to the `URL` method to support
additional parameters.

```
URL(path, name string, reqParams url.Values) (*url.URL, error)
```
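
A hedged sketch of how a caller might use that signature to force the header for S3-style storage, where the documented `response-content-type` query parameter overrides the returned `Content-Type` (the interface and function names here are illustrative):

```go
package containersketch

import "net/url"

// ObjectStorage carries only the new method described above; the interface
// name is illustrative.
type ObjectStorage interface {
	URL(path, name string, reqParams url.Values) (*url.URL, error)
}

// blobDownloadURL builds a pre-signed redirect URL while forcing the media
// type the container client expects, so SERVE_DIRECT no longer breaks the
// manifest/blob Content-Type.
func blobDownloadURL(store ObjectStorage, blobPath, fileName, mediaType string) (*url.URL, error) {
	params := url.Values{}
	params.Set("response-content-type", mediaType)
	return store.URL(blobPath, fileName, params)
}
```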
2024-11-01 03:53:59 +00:00
6543
9d62d7a443 Respect UI.ExploreDefaultSort setting again (#32357) (#32385)
Backport #32357

fix regression of https://github.com/go-gitea/gitea/pull/29430

---
*Sponsored by Kithara Software GmbH*
2024-10-31 13:49:09 +08:00
Lunny Xiao
bf53ab26fa Fix disable 2fa bug (#32320) (#32330)
Backport #32320
2024-10-25 17:54:56 +08:00
Zettat123
0d11ba93dd Fix the permission check for user search API and limit the number of returned users for /user/search (#32310)
Partially backport #32288

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2024-10-23 04:56:13 +00:00
Lunny Xiao
b7d12347f3 Add warn log when deleting inactive users (#32318) (#32321)
Backport #32318 

Add log for the problem #31480
2024-10-23 10:48:42 +08:00
6543
b6f8372d7d API: enhance SearchIssues swagger docs (#32208) (#32298)
Backport  #32208

This will result in better api clients generated out of the openapi docs for SearchIssues

---
*Sponsored by Kithara Software GmbH*
2024-10-21 08:32:34 +08:00
YR Chen
0c12252c23 Update github.com/go-enry/go-enry to v2.9.1 (#32295) (#32296)
Backport #32295

`go-enry` v2.9.1 includes the latest file patterns from Linguist, which can
identify more generated file types, e.g. `pdm.lock`.
2024-10-21 02:12:51 +08:00
Zettat123
99cac1f50c Always update expiration time when creating an artifact (#32281) (#32285)
Backport #32281

Fix #32256
2024-10-18 10:36:23 +08:00
a1012112796
2a99607add make show stats work when only one file changed (#32244) (#32268)
Backport #32244

fix https://github.com/go-gitea/gitea/issues/32226

In https://github.com/go-gitea/gitea/pull/27775, some changes were made to
only show the diff file tree when more than one file changed. But it looks
like this also broke the `diff-file-list` logic, which was not an intended
change, so this tries to fix it.

/cc @silverwind

example view:

![image](https://github.com/user-attachments/assets/281e9c4f-a269-4d36-94eb-a132058aea87)

Signed-off-by: a1012112796 <1012112796@qq.com>
2024-10-17 08:03:21 +00:00
cloudchamb3r
c1023b97aa [v1.22 backport] Fix null errors on conversation holder (#32258) (#32266) (#32282)
Backport #32266

fix #32258

Errors in the issue were due to an unhandled null check, so this fixes it.

### Detailed description for Issue & Fix
To reproduce the issue, the comment must be deleted on the Conversation
tab.
#### Before Delete
<img width="1032" alt="image"

src="https://github.com/user-attachments/assets/72df61ba-7db6-44c9-bebc-ca1178dd27f1">

#### After Delete (AS-IS)
<img width="1010" alt="image"

src="https://github.com/user-attachments/assets/36fa537e-4f8e-4535-8d02-e538c50f0dd8">

Gitea already has removal logic for `timeline-item-group`, but because of
the null ref exception the later logic that removes `timeline-item-group`
could not be called correctly.
2024-10-17 13:34:39 +08:00
wxiaoguang
7e0fd4c208 Warn users when they try to use a non-root-url to sign in/up (#32272) (#32273) 2024-10-17 09:01:44 +08:00
wxiaoguang
db7349bc0d Make owner/repo/pulls handlers use "PR reader" permission (#32254) (#32265)
Backport #32254 (no conflict)
2024-10-15 22:32:54 +08:00
Zettat123
55562f9c79 Update scheduled tasks even if changes are pushed by "ActionsUser" (#32246) (#32252)
Backport #32246

Fix #32219

Co-authored-by: delvh <dev.lh@web.de>
2024-10-14 16:55:16 +08:00
Giteabot
24b65f122a Only rename a user when they should receive a different name (#32247) (#32249)
Backport #32247 by @lunny

Fix #31996

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-10-13 19:27:37 +00:00
Giteabot
bcfe1f91d2 Fix dropdown content overflow (#31610) (#32250)
Backport #31610 by charles7668

close #31602 

Co-authored-by: charles <30816317+charles7668@users.noreply.github.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2024-10-13 03:46:55 +00:00
Giteabot
f15d5f0c4a Fix checkbox bug on private/archive filter (#32236) (#32240)
Backport #32236 by cloudchamb3r

fix #32235

Co-authored-by: cloudchamb3r <jizon0123@protonmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2024-10-11 21:13:09 +08:00
160 changed files with 2372 additions and 935 deletions

View File

@@ -3,3 +3,5 @@ self-hosted-runner:
- actuated-4cpu-8gb
- actuated-4cpu-16gb
- nscloud
- namespace-profile-gitea-release-docker
- namespace-profile-gitea-release-binary

View File

@@ -12,7 +12,7 @@ jobs:
disk-clean:
uses: ./.github/workflows/disk-clean.yml
nightly-binary:
runs-on: nscloud
runs-on: namespace-profile-gitea-release-binary
steps:
- uses: actions/checkout@v4
# fetch all commits instead of only the last as some branches are long lived and could have many between versions
@@ -60,7 +60,7 @@ jobs:
run: |
aws s3 sync dist/release s3://${{ secrets.AWS_S3_BUCKET }}/gitea/${{ steps.clean_name.outputs.branch }} --no-progress
nightly-docker-rootful:
runs-on: ubuntu-latest
runs-on: namespace-profile-gitea-release-docker
steps:
- uses: actions/checkout@v4
# fetch all commits instead of only the last as some branches are long lived and could have many between versions
@@ -97,7 +97,7 @@ jobs:
push: true
tags: gitea/gitea:${{ steps.clean_name.outputs.branch }}
nightly-docker-rootless:
runs-on: ubuntu-latest
runs-on: namespace-profile-gitea-release-docker
steps:
- uses: actions/checkout@v4
# fetch all commits instead of only the last as some branches are long lived and could have many between versions

View File

@@ -11,7 +11,7 @@ concurrency:
jobs:
binary:
runs-on: nscloud
runs-on: namespace-profile-gitea-release-binary
steps:
- uses: actions/checkout@v4
# fetch all commits instead of only the last as some branches are long lived and could have many between versions
@@ -68,7 +68,7 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.RELEASE_TOKEN }}
docker-rootful:
runs-on: ubuntu-latest
runs-on: namespace-profile-gitea-release-docker
steps:
- uses: actions/checkout@v4
# fetch all commits instead of only the last as some branches are long lived and could have many between versions
@@ -99,7 +99,7 @@ jobs:
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
docker-rootless:
runs-on: ubuntu-latest
runs-on: namespace-profile-gitea-release-docker
steps:
- uses: actions/checkout@v4
# fetch all commits instead of only the last as some branches are long lived and could have many between versions

View File

@@ -13,7 +13,7 @@ concurrency:
jobs:
binary:
runs-on: nscloud
runs-on: namespace-profile-gitea-release-binary
steps:
- uses: actions/checkout@v4
# fetch all commits instead of only the last as some branches are long lived and could have many between versions
@@ -70,7 +70,7 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.RELEASE_TOKEN }}
docker-rootful:
runs-on: ubuntu-latest
runs-on: namespace-profile-gitea-release-docker
steps:
- uses: actions/checkout@v4
# fetch all commits instead of only the last as some branches are long lived and could have many between versions
@@ -105,7 +105,7 @@ jobs:
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
docker-rootless:
runs-on: ubuntu-latest
runs-on: namespace-profile-gitea-release-docker
steps:
- uses: actions/checkout@v4
# fetch all commits instead of only the last as some branches are long lived and could have many between versions

View File

@@ -4,7 +4,94 @@ This changelog goes through the changes that have been made in each release
without substantial changes to our git log; to see the highlights of what has
been added to each release, please refer to the [blog](https://blog.gitea.com).
## [1.22.3](https://github.com/go-gitea/gitea/releases/tag/1.22.3) - 2024-10-08
## [1.22.6](https://github.com/go-gitea/gitea/releases/tag/v1.22.6) - 2024-12-12
* SECURITY
* Fix misuse of PublicKeyCallback (#32810)
* BUGFIXES
* Fix lfs migration (#32812) (#32818)
* Add two missing sync feeds for refs/pull (#32815)
* TESTING
* Avoid MacOS keychain dialog in integration tests (#32813) (#32816)
## [1.22.5](https://github.com/go-gitea/gitea/releases/tag/v1.22.5) - 2024-12-11
* SECURITY
* Upgrade crypto library (#32791)
* Fix delete branch perm checking (#32654) (#32707)
* BUGFIXES
* Add standard-compliant route to serve outdated R packages (#32783) (#32789)
* Fix internal server error when updating labels without write permission (#32776) (#32785)
* Add Swift login endpoint (#32693) (#32701)
* Fix fork page branch selection (#32711) (#32725)
* Fix word overflow in file search page (#32695) (#32699)
* Fix gogit `GetRefCommitID` (#32705) (#32712)
* Fix race condition in mermaid observer (#32599) (#32673)
* Fix a keystring misuse and refactor duplicate keystrings (#32668) (#32792)
* Bump relative-time-element to v4.4.4 (#32739)
* PERFORMANCE
* Make wiki pages visit fast (#32732) (#32745)
* MISC
* Don't create action when syncing mirror pull refs (#32659) (#32664)
## [1.22.4](https://github.com/go-gitea/gitea/releases/tag/v1.22.4) - 2024-11-14
* SECURITY
* Fix basic auth with webauthn (#32531) (#32536)
* Refactor internal routers (partial backport, auth token const time comparing) (#32473) (#32479)
* PERFORMANCE
* Remove transaction for archive download (#32186) (#32520)
* BUGFIXES
* Fix `missing signature key` error when pulling Docker images with `SERVE_DIRECT` enabled (#32365) (#32397)
* Fix get reviewers failing when selecting a user without the pull request permissions unit (#32415) (#32616)
* Fix adding index files to tmp directory (#32360) (#32593)
* Fix PR creation on forked repositories via API (#31863) (#32591)
* Fix missing menu tabs in organization project view page (#32313) (#32592)
* Support HTTP POST requests to `/userinfo`, aligning to OpenID Core specification (#32578) (#32594)
* Fix debian package clean up cron job (#32351) (#32590)
* Fix GetInactiveUsers (#32540) (#32588)
* Allow the actions user to login via the jwt token (#32527) (#32580)
* Fix submodule parsing (#32571) (#32577)
* Refactor find forks and fix possible bugs that weaken permissions check (#32528) (#32547)
* Fix some places that don't respect org full name setting (#32243) (#32550)
* Refactor push mirror find and add check for updating push mirror (#32539) (#32549)
* Fix basic auth with webauthn (#32531) (#32536)
* Fix artifact v4 upload above 8MB (#31664) (#32523)
* Fix oauth2 error handle not return immediately (#32514) (#32516)
* Fix action not triggered when commit message is too long (#32498) (#32507)
* Fix `GetRepoLink` nil pointer dereference on dashboard feed page when repo is deleted with actions enabled (#32501) (#32502)
* Fix `missing signature key` error when pulling Docker images with `SERVE_DIRECT` enabled (#32397) (#32397)
* Fix the permission check for user search API and limit the number of returned users for `/user/search` (#32310)
* Fix SearchIssues swagger docs (#32208) (#32298)
* Fix dropdown content overflow (#31610) (#32250)
* Disable Oauth check if oauth disabled (#32368) (#32480)
* Respect renamed dependencies of Cargo registry (#32430) (#32478)
* Fix mermaid diagram height when initially hidden (#32457) (#32464)
* Fix broken releases when re-pushing tags (#32435) (#32449)
* Only provide the commit summary for Discord webhook push events (#32432) (#32447)
* Only query team tables if repository is under org when getting assignees (#32414) (#32426)
* Fix created_unix for mirroring (#32342) (#32406)
* Respect UI.ExploreDefaultSort setting again (#32357) (#32385)
* Fix broken image when editing comment with non-image attachments (#32319) (#32345)
* Fix disable 2fa bug (#32320) (#32330)
* Always update expiration time when creating an artifact (#32281) (#32285)
* Fix null errors on conversation holder (#32258) (#32266) (#32282)
* Only rename a user when they should receive a different name (#32247) (#32249)
* Fix checkbox bug on private/archive filter (#32236) (#32240)
* Add a doctor check to disable the "Actions" unit for mirrors (#32424) (#32497)
* Quick fix milestone deadline 9999 (#32423)
* Make `show stats` work when only one file changed (#32244) (#32268)
* Make `owner/repo/pulls` handlers use "PR reader" permission (#32254) (#32265)
* Update scheduled tasks even if changes are pushed by "ActionsUser" (#32246) (#32252)
* MISC
* Remove unnecessary code: `GetPushMirrorsByRepoID` called on all repo pages (#32560) (#32567)
* Improve some sanitizer rules (#32534)
* Update nix development environment for v1.22.x (#32495)
* Add warn log when deleting inactive users (#32318) (#32321)
* Update github.com/go-enry/go-enry to v2.9.1 (#32295) (#32296)
* Warn users when they try to use a non-root-url to sign in/up (#32272) (#32273)
## [1.22.3](https://github.com/go-gitea/gitea/releases/tag/v1.22.3) - 2024-10-08
* SECURITY
* Fix bug when a token is given public only (#32204) (#32218)
@@ -45,7 +132,7 @@ been added to each release, please refer to the [blog](https://blog.gitea.com).
* Lazy load avatar images (#32051) (#32063)
* Upgrade cache to v0.2.1 (#32003) (#32009)
## [1.22.2](https://github.com/go-gitea/gitea/releases/tag/1.22.2) - 2024-08-28
## [1.22.2](https://github.com/go-gitea/gitea/releases/tag/v1.22.2) - 2024-08-28
* Security
* Replace v-html with v-text in search inputbox (#31966) (#31973)
@@ -101,7 +188,7 @@ been added to each release, please refer to the [blog](https://blog.gitea.com).
* Upgrade micromatch to 4.0.8 (#31944)
* Update webpack to 5.94.0 (#31941)
## [1.22.1](https://github.com/go-gitea/gitea/releases/tag/1.22.1) - 2024-07-04
## [1.22.1](https://github.com/go-gitea/gitea/releases/tag/v1.22.1) - 2024-07-04
* SECURITY
* Add replacement module for `mholt/archiver` (#31267) (#31270)

flake.lock (generated, 12 lines changed)
View File

@@ -5,11 +5,11 @@
"systems": "systems"
},
"locked": {
"lastModified": 1710146030,
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
"lastModified": 1726560853,
"narHash": "sha256-X6rJYSESBVr3hBoH0WbKE5KvhPU5bloyZ2L4K60/fPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
"rev": "c1dfcf08411b08f6b8615f7d8971a2bfa81d5e8a",
"type": "github"
},
"original": {
@@ -20,11 +20,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1715534503,
"narHash": "sha256-5ZSVkFadZbFP1THataCaSf0JH2cAH3S29hU9rrxTEqk=",
"lastModified": 1731139594,
"narHash": "sha256-IigrKK3vYRpUu+HEjPL/phrfh7Ox881er1UEsZvw9Q4=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "2057814051972fa1453ddfb0d98badbea9b83c06",
"rev": "76612b17c0ce71689921ca12d9ffdc9c23ce40b2",
"type": "github"
},
"original": {

go.mod (14 lines changed)
View File

@@ -34,13 +34,13 @@ require (
github.com/ethantkoenig/rupture v1.0.1
github.com/felixge/fgprof v0.9.4
github.com/fsnotify/fsnotify v1.7.0
github.com/gliderlabs/ssh v0.3.6
github.com/gliderlabs/ssh v0.3.8
github.com/go-ap/activitypub v0.0.0-20240316125321-b61fd6a83225
github.com/go-ap/jsonld v0.0.0-20221030091449-f2a191312c73
github.com/go-chi/chi/v5 v5.0.12
github.com/go-chi/cors v1.2.1
github.com/go-co-op/gocron v1.37.0
github.com/go-enry/go-enry/v2 v2.8.7
github.com/go-enry/go-enry/v2 v2.9.1
github.com/go-fed/httpsig v1.1.1-0.20201223112313-55836744818e
github.com/go-git/go-billy/v5 v5.5.0
github.com/go-git/go-git/v5 v5.11.0
@@ -104,12 +104,12 @@ require (
github.com/yuin/goldmark v1.7.0
github.com/yuin/goldmark-highlighting/v2 v2.0.0-20230729083705-37449abec8cc
github.com/yuin/goldmark-meta v1.1.0
golang.org/x/crypto v0.26.0
golang.org/x/crypto v0.31.0
golang.org/x/image v0.18.0
golang.org/x/net v0.28.0
golang.org/x/net v0.33.0
golang.org/x/oauth2 v0.21.0
golang.org/x/sys v0.24.0
golang.org/x/text v0.17.0
golang.org/x/sys v0.28.0
golang.org/x/text v0.21.0
golang.org/x/tools v0.24.0
google.golang.org/grpc v1.62.1
google.golang.org/protobuf v1.33.0
@@ -291,7 +291,7 @@ require (
go.uber.org/zap v1.27.0 // indirect
golang.org/x/exp v0.0.0-20240314144324-c7f7c6466f7f // indirect
golang.org/x/mod v0.20.0 // indirect
golang.org/x/sync v0.8.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/time v0.5.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240314234333-6e1732d8331c // indirect
gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc // indirect

go.sum (32 lines changed)
View File

@@ -269,8 +269,8 @@ github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nos
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/fxamacker/cbor/v2 v2.6.0 h1:sU6J2usfADwWlYDAFhZBQ6TnLFBHxgesMrQfQgk1tWA=
github.com/fxamacker/cbor/v2 v2.6.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ=
github.com/gliderlabs/ssh v0.3.6 h1:ZzjlDa05TcFRICb3anf/dSPN3ewz1Zx6CMLPWgkm3b8=
github.com/gliderlabs/ssh v0.3.6/go.mod h1:zpHEXBstFnQYtGnB8k8kQLol82umzn/2/snG7alWVD8=
github.com/gliderlabs/ssh v0.3.8 h1:a4YXD1V7xMF9g5nTkdfnja3Sxy1PVDCj1Zg4Wb8vY6c=
github.com/gliderlabs/ssh v0.3.8/go.mod h1:xYoytBv1sV0aL3CavoDuJIQNURXkkfPA/wxQ1pL1fAU=
github.com/glycerine/go-unsnap-stream v0.0.0-20181221182339-f9677308dec2/go.mod h1:/20jfyN9Y5QPEAprSgKAUr+glWDY39ZiUEAYOEv5dsE=
github.com/glycerine/goconvey v0.0.0-20190410193231-58a59202ab31/go.mod h1:Ogl1Tioa0aV7gstGFO7KhffUsb9M4ydbEbbxpcEDc24=
github.com/go-ap/activitypub v0.0.0-20240316125321-b61fd6a83225 h1:OoM81OclgRX7CUch4M7MmsH0NcmLWpFiSn7rhs6Y5ZU=
@@ -288,8 +288,8 @@ github.com/go-chi/cors v1.2.1 h1:xEC8UT3Rlp2QuWNEr4Fs/c2EAGVKBwy/1vHx3bppil4=
github.com/go-chi/cors v1.2.1/go.mod h1:sSbTewc+6wYHBBCW7ytsFSn836hqM7JxpglAy2Vzc58=
github.com/go-co-op/gocron v1.37.0 h1:ZYDJGtQ4OMhTLKOKMIch+/CY70Brbb1dGdooLEhh7b0=
github.com/go-co-op/gocron v1.37.0/go.mod h1:3L/n6BkO7ABj+TrfSVXLRzsP26zmikL4ISkLQ0O8iNY=
github.com/go-enry/go-enry/v2 v2.8.7 h1:vbab0pcf5Yo1cHQLzbWZ+QomUh3EfEU8EiR5n7W0lnQ=
github.com/go-enry/go-enry/v2 v2.8.7/go.mod h1:9yrj4ES1YrbNb1Wb7/PWYr2bpaCXUGRt0uafN0ISyG8=
github.com/go-enry/go-enry/v2 v2.9.1 h1:G9iDteJ/Mc0F4Di5NeQknf83R2OkRbwY9cAYmcqVG6U=
github.com/go-enry/go-enry/v2 v2.9.1/go.mod h1:9yrj4ES1YrbNb1Wb7/PWYr2bpaCXUGRt0uafN0ISyG8=
github.com/go-enry/go-oniguruma v1.2.1 h1:k8aAMuJfMrqm/56SG2lV9Cfti6tC4x8673aHCcBk+eo=
github.com/go-enry/go-oniguruma v1.2.1/go.mod h1:bWDhYP+S6xZQgiRL7wlTScFYBe023B6ilRZbCAD5Hf4=
github.com/go-faster/city v1.0.1 h1:4WAxSZ3V2Ws4QRDrscLEDcibJY8uf41H6AhXDrNDcGw=
@@ -835,8 +835,8 @@ golang.org/x/crypto v0.3.1-0.20221117191849-2c476679df9a/go.mod h1:hebNnKkNXi2Uz
golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU=
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
golang.org/x/crypto v0.26.0 h1:RrRspgV4mU+YwB4FYnuBoKsUapNIL5cohGAmSH3azsw=
golang.org/x/crypto v0.26.0/go.mod h1:GY7jblb9wI+FOo5y8/S2oY4zWP07AkOJ4+jxCqdqn54=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/exp v0.0.0-20240314144324-c7f7c6466f7f h1:3CW0unweImhOzd5FmYuRsD4Y4oQFKZIjAnKbjV4WIrw=
golang.org/x/exp v0.0.0-20240314144324-c7f7c6466f7f/go.mod h1:CxmFvTBINI24O/j8iY7H1xHzx2i4OsyguNBmN/uPtqc=
golang.org/x/image v0.18.0 h1:jGzIakQa/ZXI1I0Fxvaa9W7yP25TqT6cHIHn+6CqvSQ=
@@ -866,8 +866,8 @@ golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.28.0 h1:a9JDOJc5GMUJ0+UDqmLT86WiEy7iWyIhz8gz8E4e5hE=
golang.org/x/net v0.28.0/go.mod h1:yqtgsTWOOnlGLG9GFRrK3++bGOUEkNBoHZc8MEDWPNg=
golang.org/x/net v0.33.0 h1:74SYHlV8BIgHIFC/LrYkOGIwL19eTYXQ5wc6TBuO36I=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/oauth2 v0.21.0 h1:tsimM75w1tF/uws5rbeHzIWxEqElMehnc+iW793zsZs=
golang.org/x/oauth2 v0.21.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -878,8 +878,8 @@ golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181221143128-b4a75ba826a6/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -915,8 +915,8 @@ golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.24.0 h1:Twjiwq9dn6R1fQcyiK+wQyHWfaz/BJB+YIpzU/Cv3Xg=
golang.org/x/sys v0.24.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
@@ -926,8 +926,8 @@ golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0=
golang.org/x/term v0.23.0 h1:F6D4vR+EHoL9/sWAWgAR1H2DcHr4PareCbAaCo1RpuU=
golang.org/x/term v0.23.0/go.mod h1:DgV24QBUrK6jhZXl+20l6UWznPlwAHm1Q1mGHtydmSk=
golang.org/x/term v0.27.0 h1:WP60Sv1nlK1T6SupCHbXzSaN0b9wUmsPoRS9b61A23Q=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
@@ -938,8 +938,8 @@ golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.17.0 h1:XtiM5bkSOt+ewxlOE/aE/AKEHibwj/6gvWMl9Rsh0Qc=
golang.org/x/text v0.17.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=

View File

@@ -69,7 +69,7 @@ func CreateArtifact(ctx context.Context, t *ActionTask, artifactName, artifactPa
OwnerID: t.OwnerID,
CommitSHA: t.CommitSHA,
Status: int64(ArtifactStatusUploadPending),
ExpiredUnix: timeutil.TimeStamp(time.Now().Unix() + 3600*24*expiredDays),
ExpiredUnix: timeutil.TimeStamp(time.Now().Unix() + timeutil.Day*expiredDays),
}
if _, err := db.GetEngine(ctx).Insert(artifact); err != nil {
return nil, err
@@ -78,6 +78,13 @@ func CreateArtifact(ctx context.Context, t *ActionTask, artifactName, artifactPa
} else if err != nil {
return nil, err
}
if _, err := db.GetEngine(ctx).ID(artifact.ID).Cols("expired_unix").Update(&ActionArtifact{
ExpiredUnix: timeutil.TimeStamp(time.Now().Unix() + timeutil.Day*expiredDays),
}); err != nil {
return nil, err
}
return artifact, nil
}

View File

@@ -261,6 +261,7 @@ func CancelPreviousJobs(ctx context.Context, repoID int64, ref, workflowID strin
}
// InsertRun inserts a run
// The title will be cut off at 255 characters if it's longer than 255 characters.
func InsertRun(ctx context.Context, run *ActionRun, jobs []*jobparser.SingleWorkflow) error {
ctx, committer, err := db.TxContext(ctx)
if err != nil {
@@ -273,6 +274,7 @@ func InsertRun(ctx context.Context, run *ActionRun, jobs []*jobparser.SingleWork
return err
}
run.Index = index
run.Title, _ = util.SplitStringAtByteN(run.Title, 255)
if err := db.Insert(ctx, run); err != nil {
return err
@@ -386,6 +388,7 @@ func UpdateRun(ctx context.Context, run *ActionRun, cols ...string) error {
if len(cols) > 0 {
sess.Cols(cols...)
}
run.Title, _ = util.SplitStringAtByteN(run.Title, 255)
affected, err := sess.Update(run)
if err != nil {
return err

View File

@@ -242,6 +242,7 @@ func GetRunnerByID(ctx context.Context, id int64) (*ActionRunner, error) {
// UpdateRunner updates runner's information.
func UpdateRunner(ctx context.Context, r *ActionRunner, cols ...string) error {
e := db.GetEngine(ctx)
r.Name, _ = util.SplitStringAtByteN(r.Name, 255)
var err error
if len(cols) == 0 {
_, err = e.ID(r.ID).AllCols().Update(r)
@@ -263,6 +264,7 @@ func DeleteRunner(ctx context.Context, id int64) error {
// CreateRunner creates new runner.
func CreateRunner(ctx context.Context, t *ActionRunner) error {
t.Name, _ = util.SplitStringAtByteN(t.Name, 255)
return db.Insert(ctx, t)
}

View File

@@ -12,6 +12,7 @@ import (
repo_model "code.gitea.io/gitea/models/repo"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/timeutil"
"code.gitea.io/gitea/modules/util"
webhook_module "code.gitea.io/gitea/modules/webhook"
"github.com/robfig/cron/v3"
@@ -71,6 +72,7 @@ func CreateScheduleTask(ctx context.Context, rows []*ActionSchedule) error {
// Loop through each schedule row
for _, row := range rows {
row.Title, _ = util.SplitStringAtByteN(row.Title, 255)
// Create new schedule row
if err = db.Insert(ctx, row); err != nil {
return err

View File

@@ -248,6 +248,9 @@ func (a *Action) GetActDisplayNameTitle(ctx context.Context) string {
// GetRepoUserName returns the name of the action repository owner.
func (a *Action) GetRepoUserName(ctx context.Context) string {
a.loadRepo(ctx)
if a.Repo == nil {
return "(non-existing-repo)"
}
return a.Repo.OwnerName
}
@@ -260,6 +263,9 @@ func (a *Action) ShortRepoUserName(ctx context.Context) string {
// GetRepoName returns the name of the action repository.
func (a *Action) GetRepoName(ctx context.Context) string {
a.loadRepo(ctx)
if a.Repo == nil {
return "(non-existing-repo)"
}
return a.Repo.Name
}

View File

@@ -68,7 +68,8 @@ func CheckCollations(x *xorm.Engine) (*CheckCollationsResult, error) {
var candidateCollations []string
if x.Dialect().URI().DBType == schemas.MYSQL {
if _, err = x.SQL("SELECT @@collation_database").Get(&res.DatabaseCollation); err != nil {
_, err = x.SQL("SELECT DEFAULT_COLLATION_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = ?", setting.Database.Name).Get(&res.DatabaseCollation)
if err != nil {
return nil, err
}
res.IsCollationCaseSensitive = func(s string) bool {

View File

@@ -1,3 +1,22 @@
-
id: 46
attempt: 3
runner_id: 1
status: 3 # 3 is the status code for "cancelled"
started: 1683636528
stopped: 1683636626
repo_id: 4
owner_id: 1
commit_sha: c2d72f548424103f01ee1dc02889c1e2bff816b0
is_fork_pull_request: 0
token_hash: 6d8ef48297195edcc8e22c70b3020eaa06c52976db67d39b4260c64a69a2cc1508825121b7b8394e48e00b1bf8718b2aaaaa
token_salt: eeeeeeee
token_last_eight: eeeeeeee
log_filename: artifact-test2/2f/47.log
log_in_storage: 1
log_length: 707
log_size: 90179
log_expired: 0
-
id: 47
job_id: 192

View File

@@ -332,6 +332,7 @@
repo_admin_change_team_access: false
theme: ""
keep_activity_private: false
created_unix: 1730468968
-
id: 10

View File

@@ -21,6 +21,7 @@ import (
"code.gitea.io/gitea/modules/references"
api "code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/timeutil"
"code.gitea.io/gitea/modules/util"
"xorm.io/builder"
)
@@ -138,6 +139,7 @@ func ChangeIssueTitle(ctx context.Context, issue *Issue, doer *user_model.User,
}
defer committer.Close()
issue.Title, _ = util.SplitStringAtByteN(issue.Title, 255)
if err = UpdateIssueCols(ctx, issue, "name"); err != nil {
return fmt.Errorf("updateIssueCols: %w", err)
}
@@ -381,6 +383,7 @@ func NewIssueWithIndex(ctx context.Context, doer *user_model.User, opts NewIssue
}
// NewIssue creates new issue with labels for repository.
// The title will be cut off at 255 characters if it's longer than 255 characters.
func NewIssue(ctx context.Context, repo *repo_model.Repository, issue *Issue, labelIDs []int64, uuids []string) (err error) {
ctx, committer, err := db.TxContext(ctx)
if err != nil {
@@ -394,6 +397,7 @@ func NewIssue(ctx context.Context, repo *repo_model.Repository, issue *Issue, la
}
issue.Index = idx
issue.Title, _ = util.SplitStringAtByteN(issue.Title, 255)
if err = NewIssueWithIndex(ctx, issue.Poster, NewIssueOptions{
Repo: repo,

View File

@@ -84,7 +84,7 @@ func (m *Milestone) BeforeUpdate() {
// this object.
func (m *Milestone) AfterLoad() {
m.NumOpenIssues = m.NumIssues - m.NumClosedIssues
if m.DeadlineUnix.Year() == 9999 {
if m.DeadlineUnix.Year() >= 9999 {
return
}

View File

@@ -545,6 +545,7 @@ func NewPullRequest(ctx context.Context, repo *repo_model.Repository, issue *Iss
}
issue.Index = idx
issue.Title, _ = util.SplitStringAtByteN(issue.Title, 255)
if err = NewIssueWithIndex(ctx, issue.Poster, NewIssueOptions{
Repo: repo,

View File

@@ -12,6 +12,7 @@ import (
"code.gitea.io/gitea/modules/git"
giturl "code.gitea.io/gitea/modules/git/url"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/util"
"xorm.io/xorm"
)
@@ -163,7 +164,9 @@ func migratePushMirrors(x *xorm.Engine) error {
func getRemoteAddress(ownerName, repoName, remoteName string) (string, error) {
repoPath := filepath.Join(setting.RepoRootPath, strings.ToLower(ownerName), strings.ToLower(repoName)+".git")
if exist, _ := util.IsExist(repoPath); !exist {
return "", nil
}
remoteURL, err := git.GetRemoteAddress(context.Background(), repoPath, remoteName)
if err != nil {
return "", fmt.Errorf("get remote %s's address of %s/%s failed: %v", remoteName, ownerName, repoName, err)

View File

@@ -9,6 +9,7 @@ import (
"code.gitea.io/gitea/models/db"
"code.gitea.io/gitea/models/perm"
repo_model "code.gitea.io/gitea/models/repo"
"code.gitea.io/gitea/models/unit"
"xorm.io/builder"
)
@@ -83,3 +84,16 @@ func GetTeamsWithAccessToRepo(ctx context.Context, orgID, repoID int64, mode per
OrderBy("name").
Find(&teams)
}
// GetTeamsWithAccessToRepoUnit returns all teams in an organization that have given access level to the repository special unit.
func GetTeamsWithAccessToRepoUnit(ctx context.Context, orgID, repoID int64, mode perm.AccessMode, unitType unit.Type) ([]*Team, error) {
teams := make([]*Team, 0, 5)
return teams, db.GetEngine(ctx).Where("team_unit.access_mode >= ?", mode).
Join("INNER", "team_repo", "team_repo.team_id = team.id").
Join("INNER", "team_unit", "team_unit.team_id = team.id").
And("team_repo.org_id = ?", orgID).
And("team_repo.repo_id = ?", repoID).
And("team_unit.type = ?", unitType).
OrderBy("name").
Find(&teams)
}

View File

@@ -0,0 +1,31 @@
// Copyright 2024 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package organization_test
import (
"testing"
"code.gitea.io/gitea/models/db"
"code.gitea.io/gitea/models/organization"
"code.gitea.io/gitea/models/perm"
"code.gitea.io/gitea/models/repo"
"code.gitea.io/gitea/models/unit"
"code.gitea.io/gitea/models/unittest"
"github.com/stretchr/testify/assert"
)
func TestGetTeamsWithAccessToRepoUnit(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
org41 := unittest.AssertExistsAndLoadBean(t, &organization.Organization{ID: 41})
repo61 := unittest.AssertExistsAndLoadBean(t, &repo.Repository{ID: 61})
teams, err := organization.GetTeamsWithAccessToRepoUnit(db.DefaultContext, org41.ID, repo61.ID, perm.AccessModeRead, unit.TypePullRequests)
assert.NoError(t, err)
if assert.Len(t, teams, 2) {
assert.EqualValues(t, 21, teams[0].ID)
assert.EqualValues(t, 22, teams[1].ID)
}
}

View File

@@ -75,26 +75,27 @@ func ExistPackages(ctx context.Context, opts *PackageSearchOptions) (bool, error
}
// SearchPackages gets the packages matching the search options
func SearchPackages(ctx context.Context, opts *PackageSearchOptions, iter func(*packages.PackageFileDescriptor)) error {
return db.GetEngine(ctx).
func SearchPackages(ctx context.Context, opts *PackageSearchOptions) ([]*packages.PackageFileDescriptor, error) {
var pkgFiles []*packages.PackageFile
err := db.GetEngine(ctx).
Table("package_file").
Select("package_file.*").
Join("INNER", "package_version", "package_version.id = package_file.version_id").
Join("INNER", "package", "package.id = package_version.package_id").
Where(opts.toCond()).
Asc("package.lower_name", "package_version.created_unix").
Iterate(new(packages.PackageFile), func(_ int, bean any) error {
pf := bean.(*packages.PackageFile)
pfd, err := packages.GetPackageFileDescriptor(ctx, pf)
if err != nil {
return err
}
iter(pfd)
return nil
})
Asc("package.lower_name", "package_version.created_unix").Find(&pkgFiles)
if err != nil {
return nil, err
}
pfds := make([]*packages.PackageFileDescriptor, 0, len(pkgFiles))
for _, pf := range pkgFiles {
pfd, err := packages.GetPackageFileDescriptor(ctx, pf)
if err != nil {
return nil, err
}
pfds = append(pfds, pfd)
}
return pfds, nil
}
// GetDistributions gets all available distributions

View File

@@ -257,6 +257,7 @@ func GetSearchOrderByBySortType(sortType string) db.SearchOrderBy {
}
// NewProject creates a new Project
// The title will be cut off at 255 characters if it's longer than 255 characters.
func NewProject(ctx context.Context, p *Project) error {
if !IsBoardTypeValid(p.BoardType) {
p.BoardType = BoardTypeNone
@@ -276,6 +277,8 @@ func NewProject(ctx context.Context, p *Project) error {
}
defer committer.Close()
p.Title, _ = util.SplitStringAtByteN(p.Title, 255)
if err := db.Insert(ctx, p); err != nil {
return err
}
@@ -331,6 +334,7 @@ func UpdateProject(ctx context.Context, p *Project) error {
p.CardType = CardTypeTextOnly
}
p.Title, _ = util.SplitStringAtByteN(p.Title, 255)
_, err := db.GetEngine(ctx).ID(p.ID).Cols(
"title",
"description",

View File

@@ -54,21 +54,6 @@ func GetUserFork(ctx context.Context, repoID, userID int64) (*Repository, error)
return &forkedRepo, nil
}
// GetForks returns all the forks of the repository
func GetForks(ctx context.Context, repo *Repository, listOptions db.ListOptions) ([]*Repository, error) {
sess := db.GetEngine(ctx)
var forks []*Repository
if listOptions.Page == 0 {
forks = make([]*Repository, 0, repo.NumForks)
} else {
forks = make([]*Repository, 0, listOptions.PageSize)
sess = db.SetSessionPagination(sess, &listOptions)
}
return forks, sess.Find(&forks, &Repository{ForkID: repo.ID})
}
// IncrementRepoForkNum increment repository fork number
func IncrementRepoForkNum(ctx context.Context, repoID int64) error {
_, err := db.GetEngine(ctx).Exec("UPDATE `repository` SET num_forks=num_forks+1 WHERE id=?", repoID)

View File

@@ -9,15 +9,13 @@ import (
"code.gitea.io/gitea/models/db"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/optional"
"code.gitea.io/gitea/modules/timeutil"
"code.gitea.io/gitea/modules/util"
"xorm.io/builder"
)
// ErrPushMirrorNotExist mirror does not exist error
var ErrPushMirrorNotExist = util.NewNotExistErrorf("PushMirror does not exist")
// PushMirror represents mirror information of a repository.
type PushMirror struct {
ID int64 `xorm:"pk autoincr"`
@@ -96,26 +94,46 @@ func DeletePushMirrors(ctx context.Context, opts PushMirrorOptions) error {
return util.NewInvalidArgumentErrorf("repoID required and must be set")
}
type findPushMirrorOptions struct {
db.ListOptions
RepoID int64
SyncOnCommit optional.Option[bool]
}
func (opts findPushMirrorOptions) ToConds() builder.Cond {
cond := builder.NewCond()
if opts.RepoID > 0 {
cond = cond.And(builder.Eq{"repo_id": opts.RepoID})
}
if opts.SyncOnCommit.Has() {
cond = cond.And(builder.Eq{"sync_on_commit": opts.SyncOnCommit.Value()})
}
return cond
}
// GetPushMirrorsByRepoID returns push-mirror information of a repository.
func GetPushMirrorsByRepoID(ctx context.Context, repoID int64, listOptions db.ListOptions) ([]*PushMirror, int64, error) {
sess := db.GetEngine(ctx).Where("repo_id = ?", repoID)
if listOptions.Page != 0 {
sess = db.SetSessionPagination(sess, &listOptions)
mirrors := make([]*PushMirror, 0, listOptions.PageSize)
count, err := sess.FindAndCount(&mirrors)
return mirrors, count, err
return db.FindAndCount[PushMirror](ctx, findPushMirrorOptions{
ListOptions: listOptions,
RepoID: repoID,
})
}
func GetPushMirrorByIDAndRepoID(ctx context.Context, id, repoID int64) (*PushMirror, bool, error) {
var pushMirror PushMirror
has, err := db.GetEngine(ctx).Where("id = ?", id).And("repo_id = ?", repoID).Get(&pushMirror)
if !has || err != nil {
return nil, has, err
}
mirrors := make([]*PushMirror, 0, 10)
count, err := sess.FindAndCount(&mirrors)
return mirrors, count, err
return &pushMirror, true, nil
}
// GetPushMirrorsSyncedOnCommit returns push-mirrors for this repo that should be updated by new commits
func GetPushMirrorsSyncedOnCommit(ctx context.Context, repoID int64) ([]*PushMirror, error) {
mirrors := make([]*PushMirror, 0, 10)
return mirrors, db.GetEngine(ctx).
Where("repo_id = ? AND sync_on_commit = ?", repoID, true).
Find(&mirrors)
return db.Find[PushMirror](ctx, findPushMirrorOptions{
RepoID: repoID,
SyncOnCommit: optional.Some(true),
})
}
// PushMirrorsIterate iterates all push-mirror repositories.

View File

@@ -156,6 +156,7 @@ func IsReleaseExist(ctx context.Context, repoID int64, tagName string) (bool, er
// UpdateRelease updates all columns of a release
func UpdateRelease(ctx context.Context, rel *Release) error {
rel.Title, _ = util.SplitStringAtByteN(rel.Title, 255)
_, err := db.GetEngine(ctx).ID(rel.ID).AllCols().Update(rel)
return err
}

View File

@@ -98,8 +98,7 @@ func (repos RepositoryList) IDs() []int64 {
return repoIDs
}
// LoadAttributes loads the attributes for the given RepositoryList
func (repos RepositoryList) LoadAttributes(ctx context.Context) error {
func (repos RepositoryList) LoadOwners(ctx context.Context) error {
if len(repos) == 0 {
return nil
}
@@ -107,10 +106,6 @@ func (repos RepositoryList) LoadAttributes(ctx context.Context) error {
userIDs := container.FilterSlice(repos, func(repo *Repository) (int64, bool) {
return repo.OwnerID, true
})
repoIDs := make([]int64, len(repos))
for i := range repos {
repoIDs[i] = repos[i].ID
}
// Load owners.
users := make(map[int64]*user_model.User, len(userIDs))
@@ -123,12 +118,19 @@ func (repos RepositoryList) LoadAttributes(ctx context.Context) error {
for i := range repos {
repos[i].Owner = users[repos[i].OwnerID]
}
return nil
}
func (repos RepositoryList) LoadLanguageStats(ctx context.Context) error {
if len(repos) == 0 {
return nil
}
// Load primary language.
stats := make(LanguageStatList, 0, len(repos))
if err := db.GetEngine(ctx).
Where("`is_primary` = ? AND `language` != ?", true, "other").
In("`repo_id`", repoIDs).
In("`repo_id`", repos.IDs()).
Find(&stats); err != nil {
return fmt.Errorf("find primary languages: %w", err)
}
@@ -141,10 +143,18 @@ func (repos RepositoryList) LoadAttributes(ctx context.Context) error {
}
}
}
return nil
}
// LoadAttributes loads the attributes for the given RepositoryList
func (repos RepositoryList) LoadAttributes(ctx context.Context) error {
if err := repos.LoadOwners(ctx); err != nil {
return err
}
return repos.LoadLanguageStats(ctx)
}
// SearchRepoOptions holds the search options
type SearchRepoOptions struct {
db.ListOptions

View File

@@ -11,7 +11,6 @@ import (
"code.gitea.io/gitea/models/unit"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/container"
api "code.gitea.io/gitea/modules/structs"
"xorm.io/builder"
)
@@ -110,26 +109,28 @@ func GetRepoAssignees(ctx context.Context, repo *Repository) (_ []*user_model.Us
return nil, err
}
additionalUserIDs := make([]int64, 0, 10)
if err = e.Table("team_user").
Join("INNER", "team_repo", "`team_repo`.team_id = `team_user`.team_id").
Join("INNER", "team_unit", "`team_unit`.team_id = `team_user`.team_id").
Where("`team_repo`.repo_id = ? AND (`team_unit`.access_mode >= ? OR (`team_unit`.access_mode = ? AND `team_unit`.`type` = ?))",
repo.ID, perm.AccessModeWrite, perm.AccessModeRead, unit.TypePullRequests).
Distinct("`team_user`.uid").
Select("`team_user`.uid").
Find(&additionalUserIDs); err != nil {
return nil, err
}
uniqueUserIDs := make(container.Set[int64])
uniqueUserIDs.AddMultiple(userIDs...)
uniqueUserIDs.AddMultiple(additionalUserIDs...)
if repo.Owner.IsOrganization() {
additionalUserIDs := make([]int64, 0, 10)
if err = e.Table("team_user").
Join("INNER", "team_repo", "`team_repo`.team_id = `team_user`.team_id").
Join("INNER", "team_unit", "`team_unit`.team_id = `team_user`.team_id").
Where("`team_repo`.repo_id = ? AND (`team_unit`.access_mode >= ? OR (`team_unit`.access_mode = ? AND `team_unit`.`type` = ?))",
repo.ID, perm.AccessModeWrite, perm.AccessModeRead, unit.TypePullRequests).
Distinct("`team_user`.uid").
Select("`team_user`.uid").
Find(&additionalUserIDs); err != nil {
return nil, err
}
uniqueUserIDs.AddMultiple(additionalUserIDs...)
}
// Leave a seat for the owner itself to append later; if the owner is an organization,
// wasting one slot is cheaper than re-allocating the slice once.
users := make([]*user_model.User, 0, len(uniqueUserIDs)+1)
if len(userIDs) > 0 {
if len(uniqueUserIDs) > 0 {
if err = e.In("id", uniqueUserIDs.Values()).
Where(builder.Eq{"`user`.is_active": true}).
OrderBy(user_model.GetOrderByName()).
@@ -144,57 +145,6 @@ func GetRepoAssignees(ctx context.Context, repo *Repository) (_ []*user_model.Us
return users, nil
}
// GetReviewers get all users can be requested to review:
// * for private repositories this returns all users that have read access or higher to the repository.
// * for public repositories this returns all users that have read access or higher to the repository,
// all repo watchers and all organization members.
// TODO: maybe we should have a "busy" option for users to block review requests to them.
func GetReviewers(ctx context.Context, repo *Repository, doerID, posterID int64) ([]*user_model.User, error) {
// Get the owner of the repository - this often already pre-cached and if so saves complexity for the following queries
if err := repo.LoadOwner(ctx); err != nil {
return nil, err
}
cond := builder.And(builder.Neq{"`user`.id": posterID}).
And(builder.Eq{"`user`.is_active": true})
if repo.IsPrivate || repo.Owner.Visibility == api.VisibleTypePrivate {
// This a private repository:
// Anyone who can read the repository is a requestable reviewer
cond = cond.And(builder.In("`user`.id",
builder.Select("user_id").From("access").Where(
builder.Eq{"repo_id": repo.ID}.
And(builder.Gte{"mode": perm.AccessModeRead}),
),
))
if repo.Owner.Type == user_model.UserTypeIndividual && repo.Owner.ID != posterID {
// as private *user* repos don't generate an entry in the `access` table,
// the owner of a private repo needs to be explicitly added.
cond = cond.Or(builder.Eq{"`user`.id": repo.Owner.ID})
}
} else {
// This is a "public" repository:
// Any user that has read access, is a watcher or organization member can be requested to review
cond = cond.And(builder.And(builder.In("`user`.id",
builder.Select("user_id").From("access").
Where(builder.Eq{"repo_id": repo.ID}.
And(builder.Gte{"mode": perm.AccessModeRead})),
).Or(builder.In("`user`.id",
builder.Select("user_id").From("watch").
Where(builder.Eq{"repo_id": repo.ID}.
And(builder.In("mode", WatchModeNormal, WatchModeAuto))),
).Or(builder.In("`user`.id",
builder.Select("uid").From("org_user").
Where(builder.Eq{"org_id": repo.OwnerID}),
)))))
}
users := make([]*user_model.User, 0, 8)
return users, db.GetEngine(ctx).Where(cond).OrderBy(user_model.GetOrderByName()).Find(&users)
}
// GetIssuePostersWithSearch returns up to 30 users whose username starts with the prefix and who have authored an issue/pull request for the given repository
// If isShowFullName is set to true, the full name is also matched against the prefix
func GetIssuePostersWithSearch(ctx context.Context, repo *Repository, isPull bool, search string, isShowFullName bool) ([]*user_model.User, error) {

View File

@@ -38,46 +38,3 @@ func TestRepoAssignees(t *testing.T) {
assert.NotContains(t, []int64{users[0].ID, users[1].ID, users[2].ID}, 15)
}
}
func TestRepoGetReviewers(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
// test public repo
repo1 := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 1})
ctx := db.DefaultContext
reviewers, err := repo_model.GetReviewers(ctx, repo1, 2, 2)
assert.NoError(t, err)
if assert.Len(t, reviewers, 3) {
assert.ElementsMatch(t, []int64{1, 4, 11}, []int64{reviewers[0].ID, reviewers[1].ID, reviewers[2].ID})
}
// should include doer if doer is not PR poster.
reviewers, err = repo_model.GetReviewers(ctx, repo1, 11, 2)
assert.NoError(t, err)
assert.Len(t, reviewers, 3)
// should not include PR poster, if PR poster would be otherwise eligible
reviewers, err = repo_model.GetReviewers(ctx, repo1, 11, 4)
assert.NoError(t, err)
assert.Len(t, reviewers, 2)
// test private user repo
repo2 := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 2})
reviewers, err = repo_model.GetReviewers(ctx, repo2, 2, 4)
assert.NoError(t, err)
assert.Len(t, reviewers, 1)
assert.EqualValues(t, reviewers[0].ID, 2)
// test private org repo
repo3 := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 3})
reviewers, err = repo_model.GetReviewers(ctx, repo3, 2, 1)
assert.NoError(t, err)
assert.Len(t, reviewers, 2)
reviewers, err = repo_model.GetReviewers(ctx, repo3, 2, 2)
assert.NoError(t, err)
assert.Len(t, reviewers, 1)
}

View File

@@ -46,19 +46,19 @@ const (
UserTypeIndividual UserType = iota // Historic reason to make it starts at 0.
// UserTypeOrganization defines an organization
UserTypeOrganization
UserTypeOrganization // 1
// UserTypeUserReserved reserves a (non-existing) user, i.e. to prevent a spam user from re-registering after being deleted, or to reserve the name until the user is actually created later on
UserTypeUserReserved
UserTypeUserReserved // 2
// UserTypeOrganizationReserved reserves a (non-existing) organization, to be used in combination with UserTypeUserReserved
UserTypeOrganizationReserved
UserTypeOrganizationReserved // 3
// UserTypeBot defines a bot user
UserTypeBot
UserTypeBot // 4
// UserTypeRemoteUser defines a remote user for federated users
UserTypeRemoteUser
UserTypeRemoteUser // 5
)
const (
@@ -829,7 +829,13 @@ func UpdateUserCols(ctx context.Context, u *User, cols ...string) error {
// GetInactiveUsers gets all inactive users
func GetInactiveUsers(ctx context.Context, olderThan time.Duration) ([]*User, error) {
var cond builder.Cond = builder.Eq{"is_active": false}
cond := builder.And(
builder.Eq{"is_active": false},
builder.Or( // only plain user
builder.Eq{"`type`": UserTypeIndividual},
builder.Eq{"`type`": UserTypeUserReserved},
),
)
if olderThan > 0 {
cond = cond.And(builder.Lt{"created_unix": time.Now().Add(-olderThan).Unix()})
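For orientation, a minimal standalone sketch of how a condition of this shape renders through xorm's builder package; the literal type values 0 and 2 stand in for UserTypeIndividual and UserTypeUserReserved, and the exact SQL string is an assumption about builder's output format rather than something taken from this change.

```go
package main

import (
	"fmt"

	"xorm.io/builder"
)

func main() {
	// Mirrors the shape of the condition above: inactive, plain users only.
	cond := builder.And(
		builder.Eq{"is_active": false},
		builder.Or(
			builder.Eq{"`type`": 0}, // UserTypeIndividual
			builder.Eq{"`type`": 2}, // UserTypeUserReserved
		),
	)
	sql, args, err := builder.ToSQL(cond)
	fmt.Println(sql, args, err)
	// Expected shape (roughly): is_active=? AND (`type`=? OR `type`=?) [false 0 2] <nil>
}
```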

View File

@@ -562,3 +562,17 @@ func TestDisabledUserFeatures(t *testing.T) {
assert.True(t, user_model.IsFeatureDisabledWithLoginType(user, f))
}
}
func TestGetInactiveUsers(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
// all inactive users
// user1's created_unix is 1730468968
users, err := user_model.GetInactiveUsers(db.DefaultContext, 0)
assert.NoError(t, err)
assert.Len(t, users, 1)
interval := time.Now().Unix() - 1730468968 + 3600*24
users, err = user_model.GetInactiveUsers(db.DefaultContext, time.Duration(interval*int64(time.Second)))
assert.NoError(t, err)
assert.Len(t, users, 0)
}

View File

@@ -146,9 +146,8 @@ func catFileBatch(ctx context.Context, repoPath string) (WriteCloserError, *bufi
}
// ReadBatchLine reads the header line from cat-file --batch
// We expect:
// <sha> SP <type> SP <size> LF
// sha is a hex encoded here
// We expect: <oid> SP <type> SP <size> LF
// then leaving the rest of the stream "<contents> LF" to be read
func ReadBatchLine(rd *bufio.Reader) (sha []byte, typ string, size int64, err error) {
typ, err = rd.ReadString('\n')
if err != nil {
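As a rough standalone illustration (not the Gitea implementation), a header in the `<oid> SP <type> SP <size> LF` shape can be split like this; the helper name and the sample commit line are made up for the example.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseBatchHeader is an illustrative helper; edge cases handled by the real
// ReadBatchLine (such as "missing" objects) are intentionally omitted.
func parseBatchHeader(line string) (oid, typ string, size int64, err error) {
	fields := strings.Fields(strings.TrimSuffix(line, "\n"))
	if len(fields) != 3 {
		return "", "", 0, fmt.Errorf("unexpected header: %q", line)
	}
	size, err = strconv.ParseInt(fields[2], 10, 64)
	return fields[0], fields[1], size, err
}

func main() {
	fmt.Println(parseBatchHeader("ce064814f4a0d337b333e646ece456cd39fab612 commit 245\n"))
}
```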

View File

@@ -377,31 +377,43 @@ func (c *Commit) GetSubModules() (*ObjectCache, error) {
}
defer rd.Close()
return configParseSubModules(rd)
}
func configParseSubModules(rd io.Reader) (*ObjectCache, error) {
scanner := bufio.NewScanner(rd)
c.submoduleCache = newObjectCache()
var ismodule bool
var path string
submoduleCache := newObjectCache()
var subModule *SubModule
for scanner.Scan() {
if strings.HasPrefix(scanner.Text(), "[submodule") {
ismodule = true
line := strings.TrimSpace(scanner.Text())
if strings.HasPrefix(line, "[") {
if subModule != nil {
submoduleCache.Set(subModule.Name, subModule)
subModule = nil
}
if strings.HasPrefix(line, "[submodule") {
subModule = &SubModule{}
}
continue
}
if ismodule {
fields := strings.Split(scanner.Text(), "=")
if subModule != nil {
fields := strings.Split(line, "=")
k := strings.TrimSpace(fields[0])
if k == "path" {
path = strings.TrimSpace(fields[1])
subModule.Name = strings.TrimSpace(fields[1])
} else if k == "url" {
c.submoduleCache.Set(path, &SubModule{path, strings.TrimSpace(fields[1])})
ismodule = false
subModule.URL = strings.TrimSpace(fields[1])
}
}
}
if err = scanner.Err(); err != nil {
if subModule != nil {
submoduleCache.Set(subModule.Name, subModule)
}
if err := scanner.Err(); err != nil {
return nil, fmt.Errorf("GetSubModules scan: %w", err)
}
return c.submoduleCache, nil
return submoduleCache, nil
}
// GetSubModule get the sub module according entryname

View File

@@ -135,7 +135,7 @@ author KN4CK3R <admin@oldschoolhack.me> 1711702962 +0100
committer KN4CK3R <admin@oldschoolhack.me> 1711702962 +0100
encoding ISO-8859-1
gpgsig -----BEGIN PGP SIGNATURE-----
<SPACE>
iQGzBAABCgAdFiEE9HRrbqvYxPT8PXbefPSEkrowAa8FAmYGg7IACgkQfPSEkrow
Aa9olwv+P0HhtCM6CRvlUmPaqswRsDPNR4i66xyXGiSxdI9V5oJL7HLiQIM7KrFR
gizKa2COiGtugv8fE+TKqXKaJx6uJUJEjaBd8E9Af9PrAzjWj+A84lU6/PgPS8hq
@@ -150,7 +150,7 @@ gpgsig -----BEGIN PGP SIGNATURE-----
-----END PGP SIGNATURE-----
ISO-8859-1`
commitString = strings.ReplaceAll(commitString, "<SPACE>", " ")
sha := &Sha1Hash{0xfe, 0xaf, 0x4b, 0xa6, 0xbc, 0x63, 0x5f, 0xec, 0x44, 0x2f, 0x46, 0xdd, 0xd4, 0x51, 0x24, 0x16, 0xec, 0x43, 0xc2, 0xc2}
gitRepo, err := openRepositoryWithDefaultContext(filepath.Join(testReposDir, "repo1_bare"))
assert.NoError(t, err)
@@ -362,3 +362,41 @@ func Test_GetCommitBranchStart(t *testing.T) {
assert.NotEmpty(t, startCommitID)
assert.EqualValues(t, "9c9aef8dd84e02bc7ec12641deb4c930a7c30185", startCommitID)
}
func TestConfigSubModule(t *testing.T) {
input := `
[core]
path = test
[submodule "submodule1"]
path = path1
url = https://gitea.io/foo/foo
#branch = b1
[other1]
branch = master
[submodule "submodule2"]
path = path2
url = https://gitea.io/bar/bar
branch = b2
[other2]
branch = main
[submodule "submodule3"]
path = path3
url = https://gitea.io/xxx/xxx
`
subModules, err := configParseSubModules(strings.NewReader(input))
assert.NoError(t, err)
assert.Len(t, subModules.cache, 3)
sm1, _ := subModules.Get("path1")
assert.Equal(t, &SubModule{Name: "path1", URL: "https://gitea.io/foo/foo"}, sm1)
sm2, _ := subModules.Get("path2")
assert.Equal(t, &SubModule{Name: "path2", URL: "https://gitea.io/bar/bar"}, sm2)
sm3, _ := subModules.Get("path3")
assert.Equal(t, &SubModule{Name: "path3", URL: "https://gitea.io/xxx/xxx"}, sm3)
}

View File

@@ -14,9 +14,16 @@ import (
"github.com/go-git/go-git/v5/plumbing/object"
)
// GetRefCommitID returns the last commit ID string of given reference (branch or tag).
// GetRefCommitID returns the last commit ID string of given reference.
func (repo *Repository) GetRefCommitID(name string) (string, error) {
ref, err := repo.gogitRepo.Reference(plumbing.ReferenceName(name), true)
if plumbing.IsHash(name) {
return name, nil
}
refName := plumbing.ReferenceName(name)
if err := refName.Validate(); err != nil {
return "", err
}
ref, err := repo.gogitRepo.Reference(refName, true)
if err != nil {
if err == plumbing.ErrReferenceNotFound {
return "", ErrNotExist{

View File

@@ -101,3 +101,28 @@ func TestRepository_CommitsBetweenIDs(t *testing.T) {
assert.Len(t, commits, c.ExpectedCommits, "case %d", i)
}
}
func TestGetRefCommitID(t *testing.T) {
bareRepo1Path := filepath.Join(testReposDir, "repo1_bare")
bareRepo1, err := openRepositoryWithDefaultContext(bareRepo1Path)
assert.NoError(t, err)
defer bareRepo1.Close()
// these test cases are specific to the repo1_bare test repo
testCases := []struct {
Ref string
ExpectedCommitID string
}{
{RefNameFromBranch("master").String(), "ce064814f4a0d337b333e646ece456cd39fab612"},
{RefNameFromBranch("branch1").String(), "2839944139e0de9737a044f78b0e4b40d989a9e3"},
{RefNameFromTag("test").String(), "3ad28a9149a2864384548f3d17ed7f38014c9e8a"},
{"ce064814f4a0d337b333e646ece456cd39fab612", "ce064814f4a0d337b333e646ece456cd39fab612"},
}
for _, testCase := range testCases {
commitID, err := bareRepo1.GetRefCommitID(testCase.Ref)
if assert.NoError(t, err) {
assert.Equal(t, testCase.ExpectedCommitID, commitID)
}
}
}

View File

@@ -50,25 +50,35 @@ func (repo *Repository) readTreeToIndex(id ObjectID, indexFilename ...string) er
}
// ReadTreeToTemporaryIndex reads a treeish to a temporary index file
func (repo *Repository) ReadTreeToTemporaryIndex(treeish string) (filename, tmpDir string, cancel context.CancelFunc, err error) {
tmpDir, err = os.MkdirTemp("", "index")
if err != nil {
return filename, tmpDir, cancel, err
}
func (repo *Repository) ReadTreeToTemporaryIndex(treeish string) (tmpIndexFilename, tmpDir string, cancel context.CancelFunc, err error) {
defer func() {
// if error happens and there is a cancel function, do clean up
if err != nil && cancel != nil {
cancel()
cancel = nil
}
}()
filename = filepath.Join(tmpDir, ".tmp-index")
cancel = func() {
err := util.RemoveAll(tmpDir)
if err != nil {
log.Error("failed to remove tmp index file: %v", err)
removeDirFn := func(dir string) func() { // it can't use the named return value "tmpDir" directly because it is empty when an error occurs
return func() {
if err := util.RemoveAll(dir); err != nil {
log.Error("failed to remove tmp index dir: %v", err)
}
}
}
err = repo.ReadTreeToIndex(treeish, filename)
tmpDir, err = os.MkdirTemp("", "index")
if err != nil {
defer cancel()
return "", "", func() {}, err
return "", "", nil, err
}
return filename, tmpDir, cancel, err
tmpIndexFilename = filepath.Join(tmpDir, ".tmp-index")
cancel = removeDirFn(tmpDir)
err = repo.ReadTreeToIndex(treeish, tmpIndexFilename)
if err != nil {
return "", "", cancel, err
}
return tmpIndexFilename, tmpDir, cancel, err
}
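A hypothetical caller sketch for the reworked helper (the function name `useTemporaryIndex` is invented for illustration): after this change, the temporary directory has already been removed when an error is returned and the returned cancel is nil, so callers only need the usual check-then-defer pattern.

```go
package example

import "code.gitea.io/gitea/modules/git"

// useTemporaryIndex is a hypothetical caller sketching the intended contract
// of ReadTreeToTemporaryIndex after this change.
func useTemporaryIndex(repo *git.Repository) error {
	tmpIndexFile, tmpDir, cancel, err := repo.ReadTreeToTemporaryIndex("HEAD")
	if err != nil {
		// cleanup already ran inside ReadTreeToTemporaryIndex; cancel is nil here
		return err
	}
	defer cancel() // removes tmpDir once the temporary index is no longer needed
	_ = tmpIndexFile
	_ = tmpDir
	return nil
}
```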
// EmptyIndex empties the index

View File

@@ -211,6 +211,7 @@ func createRequest(ctx context.Context, method, url string, headers map[string]s
req.Header.Set(key, value)
}
req.Header.Set("Accept", AcceptHeader)
req.Header.Set("User-Agent", UserAgentHeader)
return req, nil
}

View File

@@ -15,7 +15,8 @@ const (
// MediaType contains the media type for LFS server requests
MediaType = "application/vnd.git-lfs+json"
// Some LFS servers offer content with other types, so fallback to '*/*' if application/vnd.git-lfs+json cannot be served
AcceptHeader = "application/vnd.git-lfs+json;q=0.9, */*;q=0.8"
AcceptHeader = "application/vnd.git-lfs+json;q=0.9, */*;q=0.8"
UserAgentHeader = "git-lfs"
)
// BatchRequest contains multiple requests processed in one batch operation.

View File

@@ -1,19 +0,0 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package log
import "unsafe"
//go:linkname runtime_getProfLabel runtime/pprof.runtime_getProfLabel
func runtime_getProfLabel() unsafe.Pointer //nolint
type labelMap map[string]string
func getGoroutineLabels() map[string]string {
l := (*labelMap)(runtime_getProfLabel())
if l == nil {
return nil
}
return *l
}

View File

@@ -1,33 +0,0 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package log
import (
"context"
"runtime/pprof"
"testing"
"github.com/stretchr/testify/assert"
)
func Test_getGoroutineLabels(t *testing.T) {
pprof.Do(context.Background(), pprof.Labels(), func(ctx context.Context) {
currentLabels := getGoroutineLabels()
pprof.ForLabels(ctx, func(key, value string) bool {
assert.EqualValues(t, value, currentLabels[key])
return true
})
pprof.Do(ctx, pprof.Labels("Test_getGoroutineLabels", "Test_getGoroutineLabels_child1"), func(ctx context.Context) {
currentLabels := getGoroutineLabels()
pprof.ForLabels(ctx, func(key, value string) bool {
assert.EqualValues(t, value, currentLabels[key])
return true
})
if assert.NotNil(t, currentLabels) {
assert.EqualValues(t, "Test_getGoroutineLabels_child1", currentLabels["Test_getGoroutineLabels"])
}
})
})
}

View File

@@ -200,10 +200,7 @@ func (l *LoggerImpl) Log(skip int, level Level, format string, logArgs ...any) {
event.Stacktrace = Stack(skip + 1)
}
labels := getGoroutineLabels()
if labels != nil {
event.GoroutinePid = labels["pid"]
}
event.GoroutinePid = "no-gopid"
// get a simple text message without color
msgArgs := make([]any, len(logArgs))

View File

@@ -39,7 +39,7 @@ const (
// SanitizerRules implements markup.Renderer
func (Renderer) SanitizerRules() []setting.MarkupSanitizerRule {
return []setting.MarkupSanitizerRule{
{Element: "div", AllowAttr: "class", Regexp: regexp.MustCompile(playerClassName)},
{Element: "div", AllowAttr: "class", Regexp: regexp.MustCompile("^" + playerClassName + "$")},
{Element: "div", AllowAttr: playerSrcAttr},
}
}

View File

@@ -37,9 +37,9 @@ func (Renderer) Extensions() []string {
// SanitizerRules implements markup.Renderer
func (Renderer) SanitizerRules() []setting.MarkupSanitizerRule {
return []setting.MarkupSanitizerRule{
{Element: "table", AllowAttr: "class", Regexp: regexp.MustCompile(`data-table`)},
{Element: "th", AllowAttr: "class", Regexp: regexp.MustCompile(`line-num`)},
{Element: "td", AllowAttr: "class", Regexp: regexp.MustCompile(`line-num`)},
{Element: "table", AllowAttr: "class", Regexp: regexp.MustCompile(`^data-table$`)},
{Element: "th", AllowAttr: "class", Regexp: regexp.MustCompile(`^line-num$`)},
{Element: "td", AllowAttr: "class", Regexp: regexp.MustCompile(`^line-num$`)},
}
}
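The reason for pinning these patterns with `^`/`$`, shown as a quick standalone check rather than part of the diff: Go's `MatchString` looks for the pattern anywhere in the input, so an unanchored class rule also accepts padded or extended class names.

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	loose := regexp.MustCompile(`data-table`)
	strict := regexp.MustCompile(`^data-table$`)

	class := "evil-class data-table-like"
	fmt.Println(loose.MatchString(class))  // true: a substring match is enough
	fmt.Println(strict.MatchString(class)) // false: the whole value must equal "data-table"
}
```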

View File

@@ -67,10 +67,10 @@ func (st *Sanitizer) createDefaultPolicy() *bluemonday.Policy {
}
// Allow classes for anchors
policy.AllowAttrs("class").Matching(regexp.MustCompile(`ref-issue( ref-external-issue)?`)).OnElements("a")
policy.AllowAttrs("class").Matching(regexp.MustCompile(`^ref-issue( ref-external-issue)?$`)).OnElements("a")
// Allow classes for task lists
policy.AllowAttrs("class").Matching(regexp.MustCompile(`task-list-item`)).OnElements("li")
policy.AllowAttrs("class").Matching(regexp.MustCompile(`^task-list-item$`)).OnElements("li")
// Allow classes for org mode list item status.
policy.AllowAttrs("class").Matching(regexp.MustCompile(`^(unchecked|checked|indeterminate)$`)).OnElements("li")
@@ -79,7 +79,7 @@ func (st *Sanitizer) createDefaultPolicy() *bluemonday.Policy {
policy.AllowAttrs("class").Matching(regexp.MustCompile(`^icon(\s+[\p{L}\p{N}_-]+)+$`)).OnElements("i")
// Allow classes for emojis
policy.AllowAttrs("class").Matching(regexp.MustCompile(`emoji`)).OnElements("img")
policy.AllowAttrs("class").Matching(regexp.MustCompile(`^emoji$`)).OnElements("img")
// Allow icons, emojis, chroma syntax and keyword markup on span
policy.AllowAttrs("class").Matching(regexp.MustCompile(`^((icon(\s+[\p{L}\p{N}_-]+)+)|(emoji)|(language-math display)|(language-math inline))$|^([a-z][a-z0-9]{0,2})$|^` + keywordClass + `$`)).OnElements("span")

View File

@@ -136,8 +136,16 @@ func parsePackage(r io.Reader) (*Package, error) {
dependencies := make([]*Dependency, 0, len(meta.Deps))
for _, dep := range meta.Deps {
// https://doc.rust-lang.org/cargo/reference/registry-web-api.html#publish
// It is a string of the new package name if the dependency is renamed, otherwise empty
name := dep.ExplicitNameInToml
pkg := &dep.Name
if name == "" {
name = dep.Name
pkg = nil
}
dependencies = append(dependencies, &Dependency{
Name: dep.Name,
Name: name,
Req: dep.VersionReq,
Features: dep.Features,
Optional: dep.Optional,
@@ -145,6 +153,7 @@ func parsePackage(r io.Reader) (*Package, error) {
Target: dep.Target,
Kind: dep.Kind,
Registry: dep.Registry,
Package: pkg,
})
}

View File

@@ -13,16 +13,16 @@ import (
"github.com/stretchr/testify/assert"
)
const (
description = "Package Description"
author = "KN4CK3R"
homepage = "https://gitea.io/"
license = "MIT"
)
func TestParsePackage(t *testing.T) {
createPackage := func(name, version string) io.Reader {
metadata := `{
const (
description = "Package Description"
author = "KN4CK3R"
homepage = "https://gitea.io/"
license = "MIT"
payload = "gitea test dummy payload" // a fake payload for test only
)
makeDefaultPackageMeta := func(name, version string) string {
return `{
"name":"` + name + `",
"vers":"` + version + `",
"description":"` + description + `",
@@ -36,18 +36,19 @@ func TestParsePackage(t *testing.T) {
"homepage":"` + homepage + `",
"license":"` + license + `"
}`
}
createPackage := func(metadata string) io.Reader {
var buf bytes.Buffer
binary.Write(&buf, binary.LittleEndian, uint32(len(metadata)))
buf.WriteString(metadata)
binary.Write(&buf, binary.LittleEndian, uint32(4))
buf.WriteString("test")
binary.Write(&buf, binary.LittleEndian, uint32(len(payload)))
buf.WriteString(payload)
return &buf
}
t.Run("InvalidName", func(t *testing.T) {
for _, name := range []string{"", "0test", "-test", "_test", strings.Repeat("a", 65)} {
data := createPackage(name, "1.0.0")
data := createPackage(makeDefaultPackageMeta(name, "1.0.0"))
cp, err := ParsePackage(data)
assert.Nil(t, cp)
@@ -57,7 +58,7 @@ func TestParsePackage(t *testing.T) {
t.Run("InvalidVersion", func(t *testing.T) {
for _, version := range []string{"", "1.", "-1.0", "1.0.0/1"} {
data := createPackage("test", version)
data := createPackage(makeDefaultPackageMeta("test", version))
cp, err := ParsePackage(data)
assert.Nil(t, cp)
@@ -66,7 +67,7 @@ func TestParsePackage(t *testing.T) {
})
t.Run("Valid", func(t *testing.T) {
data := createPackage("test", "1.0.0")
data := createPackage(makeDefaultPackageMeta("test", "1.0.0"))
cp, err := ParsePackage(data)
assert.NotNil(t, cp)
@@ -78,9 +79,34 @@ func TestParsePackage(t *testing.T) {
assert.Equal(t, []string{author}, cp.Metadata.Authors)
assert.Len(t, cp.Metadata.Dependencies, 1)
assert.Equal(t, "dep", cp.Metadata.Dependencies[0].Name)
assert.Nil(t, cp.Metadata.Dependencies[0].Package)
assert.Equal(t, homepage, cp.Metadata.ProjectURL)
assert.Equal(t, license, cp.Metadata.License)
content, _ := io.ReadAll(cp.Content)
assert.Equal(t, "test", string(content))
assert.Equal(t, payload, string(content))
})
t.Run("Renamed", func(t *testing.T) {
data := createPackage(`{
"name":"test-pkg",
"vers":"1.0",
"description":"test-desc",
"authors": ["test-author"],
"deps":[
{
"name":"dep-renamed",
"explicit_name_in_toml":"dep-explicit",
"version_req":"1.0"
}
],
"homepage":"https://gitea.io/",
"license":"MIT"
}`)
cp, err := ParsePackage(data)
assert.NoError(t, err)
assert.Equal(t, "test-pkg", cp.Name)
assert.Equal(t, "https://gitea.io/", cp.Metadata.ProjectURL)
assert.Equal(t, "dep-explicit", cp.Metadata.Dependencies[0].Name)
assert.Equal(t, "dep-renamed", *cp.Metadata.Dependencies[0].Package)
})
}

View File

@@ -37,8 +37,8 @@ func (s *ContentStore) ShouldServeDirect() bool {
return setting.Packages.Storage.MinioConfig.ServeDirect
}
func (s *ContentStore) GetServeDirectURL(key BlobHash256Key, filename string) (*url.URL, error) {
return s.store.URL(KeyToRelativePath(key), filename)
func (s *ContentStore) GetServeDirectURL(key BlobHash256Key, filename string, reqParams url.Values) (*url.URL, error) {
return s.store.URL(KeyToRelativePath(key), filename, reqParams)
}
// FIXME: Workaround to be removed in v1.20

View File

@@ -43,7 +43,7 @@ Ensure you are running in the correct environment or set the correct configurati
req := httplib.NewRequest(url, method).
SetContext(ctx).
Header("X-Real-IP", getClientIP()).
Header("Authorization", fmt.Sprintf("Bearer %s", setting.InternalToken)).
Header("X-Gitea-Internal-Auth", fmt.Sprintf("Bearer %s", setting.InternalToken)).
SetTLSClientConfig(&tls.Config{
InsecureSkipVerify: true,
ServerName: setting.Domain,

View File

@@ -340,9 +340,10 @@ func pullMirrorReleaseSync(ctx context.Context, repo *repo_model.Repository, git
for _, tag := range updates {
if _, err := db.GetEngine(ctx).Where("repo_id = ? AND lower_tag_name = ?", repo.ID, strings.ToLower(tag.Name)).
Cols("sha1").
Cols("sha1", "created_unix").
Update(&repo_model.Release{
Sha1: tag.Object.String(),
Sha1: tag.Object.String(),
CreatedUnix: timeutil.TimeStamp(tag.Tagger.When.Unix()),
}); err != nil {
return fmt.Errorf("unable to update tag %s for pull-mirror Repo[%d:%s/%s]: %w", tag.Name, repo.ID, repo.OwnerName, repo.Name, err)
}

View File

@@ -13,10 +13,12 @@ import (
"errors"
"fmt"
"io"
"maps"
"net"
"os"
"os/exec"
"path/filepath"
"reflect"
"strconv"
"strings"
"sync"
@@ -33,9 +35,22 @@ import (
gossh "golang.org/x/crypto/ssh"
)
type contextKey string
// The ssh auth overall works like this:
// NewServerConn:
// serverHandshake+serverAuthenticate:
// PublicKeyCallback:
// PublicKeyHandler (our code):
// reset(ctx.Permissions) and set ctx.Permissions.giteaKeyID = keyID
// pubKey.Verify
// return ctx.Permissions // only reaches here, the pub key is really authenticated
// set conn.Permissions from serverAuthenticate
// sessionHandler(conn)
//
// Then sessionHandler should only use the "verified keyID" from the original ssh conn, but not the ctx one.
// Otherwise, if a user provides 2 keys A (a correct one) and B (the public key matches but there is no private key),
// then only A authenticates successfully, but sessionHandler would see B's keyID
const giteaKeyID = contextKey("gitea-key-id")
const giteaPermissionExtensionKeyID = "gitea-perm-ext-key-id"
func getExitStatusFromError(err error) int {
if err == nil {
@@ -61,8 +76,32 @@ func getExitStatusFromError(err error) int {
return waitStatus.ExitStatus()
}
// sessionPartial is the private struct from "gliderlabs/ssh/session.go"
// We need to read the original "conn" field from "ssh.Session interface" which contains the "*session pointer"
// https://github.com/gliderlabs/ssh/blob/d137aad99cd6f2d9495bfd98c755bec4e5dffb8c/session.go#L109-L113
// If upstream fixes the problem and/or changes the struct, we need to follow.
// If the struct mismatches, the builtin ssh server will fail during integration tests.
type sessionPartial struct {
sync.Mutex
gossh.Channel
conn *gossh.ServerConn
}
func ptr[T any](intf any) *T {
// https://pkg.go.dev/unsafe#Pointer
// (1) Conversion of a *T1 to Pointer to *T2.
// Provided that T2 is no larger than T1 and that the two share an equivalent memory layout,
// this conversion allows reinterpreting data of one type as data of another type.
v := reflect.ValueOf(intf)
p := v.UnsafePointer()
return (*T)(p)
}
func sessionHandler(session ssh.Session) {
keyID := fmt.Sprintf("%d", session.Context().Value(giteaKeyID).(int64))
// session.Permissions() can't be used here because it only reads the value from ctx, which might not be the authenticated one,
// so we must use the original ssh conn, which always contains the correct (verified) keyID.
sshConn := ptr[sessionPartial](session)
keyID := sshConn.conn.Permissions.Extensions[giteaPermissionExtensionKeyID]
command := session.RawCommand()
@@ -164,6 +203,23 @@ func sessionHandler(session ssh.Session) {
}
func publicKeyHandler(ctx ssh.Context, key ssh.PublicKey) bool {
// The publicKeyHandler (PublicKeyCallback) only provides the candidate keys for authentication;
// it does NOT really verify the key here, so we can only record the related information here.
// After authentication (Verify), the "Permissions" will be assigned to the ssh conn,
// and then we can use them in the session handler.
// first, reset the ctx permissions (just like https://github.com/gliderlabs/ssh/pull/243 does)
// they shouldn't be reused across different ssh connections (sessions); each pub key should have its own "Permissions"
oldCtxPerm := ctx.Permissions().Permissions
ctx.Permissions().Permissions = &gossh.Permissions{}
ctx.Permissions().Permissions.CriticalOptions = maps.Clone(oldCtxPerm.CriticalOptions)
setPermExt := func(keyID int64) {
ctx.Permissions().Permissions.Extensions = map[string]string{
giteaPermissionExtensionKeyID: fmt.Sprint(keyID),
}
}
if log.IsDebug() { // <- FingerprintSHA256 is kinda expensive so only calculate it if necessary
log.Debug("Handle Public Key: Fingerprint: %s from %s", gossh.FingerprintSHA256(key), ctx.RemoteAddr())
}
@@ -238,7 +294,7 @@ func publicKeyHandler(ctx ssh.Context, key ssh.PublicKey) bool {
if log.IsDebug() { // <- FingerprintSHA256 is kinda expensive so only calculate it if necessary
log.Debug("Successfully authenticated: %s Certificate Fingerprint: %s Principal: %s", ctx.RemoteAddr(), gossh.FingerprintSHA256(key), principal)
}
ctx.SetValue(giteaKeyID, pkey.ID)
setPermExt(pkey.ID)
return true
}
@@ -266,7 +322,7 @@ func publicKeyHandler(ctx ssh.Context, key ssh.PublicKey) bool {
if log.IsDebug() { // <- FingerprintSHA256 is kinda expensive so only calculate it if necessary
log.Debug("Successfully authenticated: %s Public Key Fingerprint: %s", ctx.RemoteAddr(), gossh.FingerprintSHA256(key))
}
ctx.SetValue(giteaKeyID, pkey.ID)
setPermExt(pkey.ID)
return true
}

View File

@@ -30,7 +30,7 @@ func (s discardStorage) Delete(_ string) error {
return fmt.Errorf("%s", s)
}
func (s discardStorage) URL(_, _ string) (*url.URL, error) {
func (s discardStorage) URL(_, _ string, _ url.Values) (*url.URL, error) {
return nil, fmt.Errorf("%s", s)
}

View File

@@ -37,7 +37,7 @@ func Test_discardStorage(t *testing.T) {
assert.Error(t, err, string(tt))
}
{
got, err := tt.URL("path", "name")
got, err := tt.URL("path", "name", nil)
assert.Nil(t, got)
assert.Errorf(t, err, string(tt))
}

View File

@@ -114,7 +114,7 @@ func (l *LocalStorage) Delete(path string) error {
}
// URL gets the redirect URL to a file
func (l *LocalStorage) URL(path, name string) (*url.URL, error) {
func (l *LocalStorage) URL(path, name string, reqParams url.Values) (*url.URL, error) {
return nil, ErrURLNotSupported
}

View File

@@ -235,8 +235,12 @@ func (m *MinioStorage) Delete(path string) error {
}
// URL gets the redirect URL to a file. The presigned link is valid for 5 minutes.
func (m *MinioStorage) URL(path, name string) (*url.URL, error) {
reqParams := make(url.Values)
func (m *MinioStorage) URL(path, name string, serveDirectReqParams url.Values) (*url.URL, error) {
// copy serveDirectReqParams
reqParams, err := url.ParseQuery(serveDirectReqParams.Encode())
if err != nil {
return nil, err
}
// TODO it may be good to embed images with 'inline' like ServeData does, but we don't want to have to read the file, do we?
reqParams.Set("response-content-disposition", "attachment; filename=\""+quoteEscaper.Replace(name)+"\"")
u, err := m.client.PresignedGetObject(m.ctx, m.bucket, m.buildMinioPath(path), 5*time.Minute, reqParams)
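A small standalone check of the copy trick used above: `Encode` followed by `ParseQuery` duplicates the values and also tolerates a nil `url.Values`, which is what call sites pass when they have no extra parameters.

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	var serveDirectReqParams url.Values // nil, like most call sites in this change

	// Encoding a nil Values yields "", and ParseQuery("") yields an empty map with no error.
	reqParams, err := url.ParseQuery(serveDirectReqParams.Encode())
	fmt.Println(len(reqParams), err) // 0 <nil>

	// Mutating the copy never touches the caller's map.
	reqParams.Set("response-content-disposition", `attachment; filename="file.bin"`)
	fmt.Println(serveDirectReqParams, reqParams)
}
```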

View File

@@ -63,7 +63,7 @@ type ObjectStorage interface {
Save(path string, r io.Reader, size int64) (int64, error)
Stat(path string) (os.FileInfo, error)
Delete(path string) error
URL(path, name string) (*url.URL, error)
URL(path, name string, reqParams url.Values) (*url.URL, error)
IterateObjects(path string, iterator func(path string, obj Object) error) error
}

View File

@@ -5,6 +5,7 @@ package web
import (
"net/http"
"reflect"
"strings"
"code.gitea.io/gitea/modules/web/middleware"
@@ -80,15 +81,23 @@ func (r *Route) getPattern(pattern string) string {
return strings.TrimSuffix(newPattern, "/")
}
func isNilOrFuncNil(v any) bool {
if v == nil {
return true
}
r := reflect.ValueOf(v)
return r.Kind() == reflect.Func && r.IsNil()
}
func (r *Route) wrapMiddlewareAndHandler(h []any) ([]func(http.Handler) http.Handler, http.HandlerFunc) {
handlerProviders := make([]func(http.Handler) http.Handler, 0, len(r.curMiddlewares)+len(h)+1)
for _, m := range r.curMiddlewares {
if m != nil {
if !isNilOrFuncNil(m) {
handlerProviders = append(handlerProviders, toHandlerProvider(m))
}
}
for _, m := range h {
if h != nil {
if !isNilOrFuncNil(m) {
handlerProviders = append(handlerProviders, toHandlerProvider(m))
}
}
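The typed-nil case this helper guards against, as a standalone illustration: a nil middleware function stored in an `any` value is not `== nil`, so the previous check let it through.

```go
package main

import (
	"fmt"
	"net/http"
	"reflect"
)

func main() {
	var mw func(http.Handler) http.Handler // nil function value
	var m any = mw                         // the interface now holds a typed nil

	fmt.Println(m == nil) // false: the interface itself is non-nil
	v := reflect.ValueOf(m)
	fmt.Println(v.Kind() == reflect.Func && v.IsNil()) // true: what isNilOrFuncNil detects
}
```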

View File

@@ -1434,8 +1434,6 @@ issues.new.no_items = No items
issues.new.milestone = Milestone
issues.new.no_milestone = No Milestone
issues.new.clear_milestone = Clear milestone
issues.new.open_milestone = Open Milestones
issues.new.closed_milestone = Closed Milestones
issues.new.assignees = Assignees
issues.new.clear_assignees = Clear assignees
issues.new.no_assignees = No Assignees

package-lock.json (generated)
View File

@@ -10,7 +10,7 @@
"@citation-js/plugin-csl": "0.7.11",
"@citation-js/plugin-software-formats": "0.6.1",
"@github/markdown-toolbar-element": "2.2.3",
"@github/relative-time-element": "4.4.2",
"@github/relative-time-element": "4.4.4",
"@github/text-expander-element": "2.6.1",
"@mcaptcha/vanilla-glue": "0.1.0-alpha-3",
"@primer/octicons": "19.9.0",
@@ -1020,9 +1020,9 @@
"integrity": "sha512-AlquKGee+IWiAMYVB0xyHFZRMnu4n3X4HTvJHu79GiVJ1ojTukCWyxMlF5NMsecoLcBKsuBhx3QPv2vkE/zQ0A=="
},
"node_modules/@github/relative-time-element": {
"version": "4.4.2",
"resolved": "https://registry.npmjs.org/@github/relative-time-element/-/relative-time-element-4.4.2.tgz",
"integrity": "sha512-wTXunu3hmuGljA5CHaaoUIKV0oI35wno0FKJl2yqKplTRnsCA5bPNj4bDeVIubkuskql6jwionWLlGM1Y6QLaw==",
"version": "4.4.4",
"resolved": "https://registry.npmjs.org/@github/relative-time-element/-/relative-time-element-4.4.4.tgz",
"integrity": "sha512-Oi8uOL8O+ZWLD7dHRWCkm2cudcTYtB3VyOYf9BtzCgDGm+OKomyOREtItNMtWl1dxvec62BTKErq36uy+RYxQg==",
"license": "MIT"
},
"node_modules/@github/text-expander-element": {

View File

@@ -9,7 +9,7 @@
"@citation-js/plugin-csl": "0.7.11",
"@citation-js/plugin-software-formats": "0.6.1",
"@github/markdown-toolbar-element": "2.2.3",
"@github/relative-time-element": "4.4.2",
"@github/relative-time-element": "4.4.4",
"@github/text-expander-element": "2.6.1",
"@mcaptcha/vanilla-glue": "0.1.0-alpha-3",
"@primer/octicons": "19.9.0",

View File

@@ -429,7 +429,7 @@ func (ar artifactRoutes) getDownloadArtifactURL(ctx *ArtifactContext) {
for _, artifact := range artifacts {
var downloadURL string
if setting.Actions.ArtifactStorage.MinioConfig.ServeDirect {
u, err := ar.fs.URL(artifact.StoragePath, artifact.ArtifactName)
u, err := ar.fs.URL(artifact.StoragePath, artifact.ArtifactName, nil)
if err != nil && !errors.Is(err, storage.ErrURLNotSupported) {
log.Error("Error getting serve direct url: %v", err)
}

View File

@@ -123,6 +123,54 @@ func listChunksByRunID(st storage.ObjectStorage, runID int64) (map[int64][]*chun
return chunksMap, nil
}
func listChunksByRunIDV4(st storage.ObjectStorage, runID, artifactID int64, blist *BlockList) ([]*chunkFileItem, error) {
storageDir := fmt.Sprintf("tmpv4%d", runID)
var chunks []*chunkFileItem
chunkMap := map[string]*chunkFileItem{}
dummy := &chunkFileItem{}
for _, name := range blist.Latest {
chunkMap[name] = dummy
}
if err := st.IterateObjects(storageDir, func(fpath string, obj storage.Object) error {
baseName := filepath.Base(fpath)
if !strings.HasPrefix(baseName, "block-") {
return nil
}
// when reading chunks from storage, a path only contains the storage dir and basename,
// regardless of the subdirectory setting in the storage config
item := chunkFileItem{Path: storageDir + "/" + baseName, ArtifactID: artifactID}
var size int64
var b64chunkName string
if _, err := fmt.Sscanf(baseName, "block-%d-%d-%s", &item.RunID, &size, &b64chunkName); err != nil {
return fmt.Errorf("parse content range error: %v", err)
}
rchunkName, err := base64.URLEncoding.DecodeString(b64chunkName)
if err != nil {
return fmt.Errorf("failed to parse chunkName: %v", err)
}
chunkName := string(rchunkName)
item.End = item.Start + size - 1
if _, ok := chunkMap[chunkName]; ok {
chunkMap[chunkName] = &item
}
return nil
}); err != nil {
return nil, err
}
for i, name := range blist.Latest {
chunk, ok := chunkMap[name]
if !ok || chunk.Path == "" {
return nil, fmt.Errorf("missing Chunk (%d/%d): %s", i, len(blist.Latest), name)
}
chunks = append(chunks, chunk)
if i > 0 {
chunk.Start = chunkMap[blist.Latest[i-1]].End + 1
chunk.End += chunk.Start
}
}
return chunks, nil
}
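A hedged round-trip sketch of the temporary block object naming used by the v4 upload path (the run ID, size, and block ID values are made up): the upload side encodes the client's block ID into the object basename, and the listing side recovers it with `Sscanf` plus base64, as above.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	runID, size, blockid := int64(75), int64(1024), "blockId1"

	// Save side: object basename as written by the "block" branch of uploadArtifact.
	name := fmt.Sprintf("block-%d-%d-%s", runID, size,
		base64.URLEncoding.EncodeToString([]byte(blockid)))

	// List side: parse it back, as listChunksByRunIDV4 does.
	var gotRunID, gotSize int64
	var b64 string
	if _, err := fmt.Sscanf(name, "block-%d-%d-%s", &gotRunID, &gotSize, &b64); err != nil {
		panic(err)
	}
	raw, _ := base64.URLEncoding.DecodeString(b64)
	fmt.Println(gotRunID, gotSize, string(raw)) // 75 1024 blockId1
}
```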
func mergeChunksForRun(ctx *ArtifactContext, st storage.ObjectStorage, runID int64, artifactName string) error {
// read all db artifacts by name
artifacts, err := db.Find[actions.ActionArtifact](ctx, actions.FindArtifactsOptions{
@@ -230,7 +278,7 @@ func mergeChunksForArtifact(ctx *ArtifactContext, chunks []*chunkFileItem, st st
rawChecksum := hash.Sum(nil)
actualChecksum := hex.EncodeToString(rawChecksum)
if !strings.HasSuffix(checksum, actualChecksum) {
return fmt.Errorf("update artifact error checksum is invalid")
return fmt.Errorf("update artifact error checksum is invalid %v vs %v", checksum, actualChecksum)
}
}

View File

@@ -24,8 +24,15 @@ package actions
// PUT: http://localhost:3000/twirp/github.actions.results.api.v1.ArtifactService/UploadArtifact?sig=mO7y35r4GyjN7fwg0DTv3-Fv1NDXD84KLEgLpoPOtDI=&expires=2024-01-23+21%3A48%3A37.20833956+%2B0100+CET&artifactName=test&taskID=75&comp=block
// 1.3. Continue Upload Zip Content to Blobstorage (unauthenticated request), repeat until everything is uploaded
// PUT: http://localhost:3000/twirp/github.actions.results.api.v1.ArtifactService/UploadArtifact?sig=mO7y35r4GyjN7fwg0DTv3-Fv1NDXD84KLEgLpoPOtDI=&expires=2024-01-23+21%3A48%3A37.20833956+%2B0100+CET&artifactName=test&taskID=75&comp=appendBlock
// 1.4. Unknown xml payload to Blobstorage (unauthenticated request), ignored for now
// 1.4. BlockList xml payload to Blobstorage (unauthenticated request)
// Files of about 800MB are uploaded in parallel and/or out of order; this file is needed to ensure the correct order
// PUT: http://localhost:3000/twirp/github.actions.results.api.v1.ArtifactService/UploadArtifact?sig=mO7y35r4GyjN7fwg0DTv3-Fv1NDXD84KLEgLpoPOtDI=&expires=2024-01-23+21%3A48%3A37.20833956+%2B0100+CET&artifactName=test&taskID=75&comp=blockList
// Request
// <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
// <BlockList>
// <Latest>blockId1</Latest>
// <Latest>blockId2</Latest>
// </BlockList>
// 1.5. FinalizeArtifact
// Post: /twirp/github.actions.results.api.v1.ArtifactService/FinalizeArtifact
// Request
@@ -82,6 +89,7 @@ import (
"crypto/hmac"
"crypto/sha256"
"encoding/base64"
"encoding/xml"
"fmt"
"io"
"net/http"
@@ -152,31 +160,34 @@ func ArtifactsV4Routes(prefix string) *web.Route {
return m
}
func (r artifactV4Routes) buildSignature(endp, expires, artifactName string, taskID int64) []byte {
func (r artifactV4Routes) buildSignature(endp, expires, artifactName string, taskID, artifactID int64) []byte {
mac := hmac.New(sha256.New, setting.GetGeneralTokenSigningSecret())
mac.Write([]byte(endp))
mac.Write([]byte(expires))
mac.Write([]byte(artifactName))
mac.Write([]byte(fmt.Sprint(taskID)))
mac.Write([]byte(fmt.Sprint(artifactID)))
return mac.Sum(nil)
}
func (r artifactV4Routes) buildArtifactURL(ctx *ArtifactContext, endp, artifactName string, taskID int64) string {
func (r artifactV4Routes) buildArtifactURL(ctx *ArtifactContext, endp, artifactName string, taskID, artifactID int64) string {
expires := time.Now().Add(60 * time.Minute).Format("2006-01-02 15:04:05.999999999 -0700 MST")
uploadURL := strings.TrimSuffix(httplib.GuessCurrentAppURL(ctx), "/") + strings.TrimSuffix(r.prefix, "/") +
"/" + endp + "?sig=" + base64.URLEncoding.EncodeToString(r.buildSignature(endp, expires, artifactName, taskID)) + "&expires=" + url.QueryEscape(expires) + "&artifactName=" + url.QueryEscape(artifactName) + "&taskID=" + fmt.Sprint(taskID)
"/" + endp + "?sig=" + base64.URLEncoding.EncodeToString(r.buildSignature(endp, expires, artifactName, taskID, artifactID)) + "&expires=" + url.QueryEscape(expires) + "&artifactName=" + url.QueryEscape(artifactName) + "&taskID=" + fmt.Sprint(taskID) + "&artifactID=" + fmt.Sprint(artifactID)
return uploadURL
}
func (r artifactV4Routes) verifySignature(ctx *ArtifactContext, endp string) (*actions.ActionTask, string, bool) {
rawTaskID := ctx.Req.URL.Query().Get("taskID")
rawArtifactID := ctx.Req.URL.Query().Get("artifactID")
sig := ctx.Req.URL.Query().Get("sig")
expires := ctx.Req.URL.Query().Get("expires")
artifactName := ctx.Req.URL.Query().Get("artifactName")
dsig, _ := base64.URLEncoding.DecodeString(sig)
taskID, _ := strconv.ParseInt(rawTaskID, 10, 64)
artifactID, _ := strconv.ParseInt(rawArtifactID, 10, 64)
expecedsig := r.buildSignature(endp, expires, artifactName, taskID)
expecedsig := r.buildSignature(endp, expires, artifactName, taskID, artifactID)
if !hmac.Equal(dsig, expecedsig) {
log.Error("Error unauthorized")
ctx.Error(http.StatusUnauthorized, "Error unauthorized")
@@ -271,6 +282,8 @@ func (r *artifactV4Routes) createArtifact(ctx *ArtifactContext) {
return
}
artifact.ContentEncoding = ArtifactV4ContentEncoding
artifact.FileSize = 0
artifact.FileCompressedSize = 0
if err := actions.UpdateArtifactByID(ctx, artifact.ID, artifact); err != nil {
log.Error("Error UpdateArtifactByID: %v", err)
ctx.Error(http.StatusInternalServerError, "Error UpdateArtifactByID")
@@ -279,7 +292,7 @@ func (r *artifactV4Routes) createArtifact(ctx *ArtifactContext) {
respData := CreateArtifactResponse{
Ok: true,
SignedUploadUrl: r.buildArtifactURL(ctx, "UploadArtifact", artifactName, ctx.ActionTask.ID),
SignedUploadUrl: r.buildArtifactURL(ctx, "UploadArtifact", artifactName, ctx.ActionTask.ID, artifact.ID),
}
r.sendProtbufBody(ctx, &respData)
}
@@ -293,38 +306,77 @@ func (r *artifactV4Routes) uploadArtifact(ctx *ArtifactContext) {
comp := ctx.Req.URL.Query().Get("comp")
switch comp {
case "block", "appendBlock":
// get artifact by name
artifact, err := r.getArtifactByName(ctx, task.Job.RunID, artifactName)
if err != nil {
log.Error("Error artifact not found: %v", err)
ctx.Error(http.StatusNotFound, "Error artifact not found")
return
}
blockid := ctx.Req.URL.Query().Get("blockid")
if blockid == "" {
// get artifact by name
artifact, err := r.getArtifactByName(ctx, task.Job.RunID, artifactName)
if err != nil {
log.Error("Error artifact not found: %v", err)
ctx.Error(http.StatusNotFound, "Error artifact not found")
return
}
if comp == "block" {
artifact.FileSize = 0
artifact.FileCompressedSize = 0
_, err = appendUploadChunk(r.fs, ctx, artifact, artifact.FileSize, ctx.Req.ContentLength, artifact.RunID)
if err != nil {
log.Error("Error runner api getting task: task is not running")
ctx.Error(http.StatusInternalServerError, "Error runner api getting task: task is not running")
return
}
artifact.FileCompressedSize += ctx.Req.ContentLength
artifact.FileSize += ctx.Req.ContentLength
if err := actions.UpdateArtifactByID(ctx, artifact.ID, artifact); err != nil {
log.Error("Error UpdateArtifactByID: %v", err)
ctx.Error(http.StatusInternalServerError, "Error UpdateArtifactByID")
return
}
} else {
_, err := r.fs.Save(fmt.Sprintf("tmpv4%d/block-%d-%d-%s", task.Job.RunID, task.Job.RunID, ctx.Req.ContentLength, base64.URLEncoding.EncodeToString([]byte(blockid))), ctx.Req.Body, -1)
if err != nil {
log.Error("Error runner api getting task: task is not running")
ctx.Error(http.StatusInternalServerError, "Error runner api getting task: task is not running")
return
}
}
_, err = appendUploadChunk(r.fs, ctx, artifact, artifact.FileSize, ctx.Req.ContentLength, artifact.RunID)
ctx.JSON(http.StatusCreated, "appended")
case "blocklist":
rawArtifactID := ctx.Req.URL.Query().Get("artifactID")
artifactID, _ := strconv.ParseInt(rawArtifactID, 10, 64)
_, err := r.fs.Save(fmt.Sprintf("tmpv4%d/%d-%d-blocklist", task.Job.RunID, task.Job.RunID, artifactID), ctx.Req.Body, -1)
if err != nil {
log.Error("Error runner api getting task: task is not running")
ctx.Error(http.StatusInternalServerError, "Error runner api getting task: task is not running")
return
}
artifact.FileCompressedSize += ctx.Req.ContentLength
artifact.FileSize += ctx.Req.ContentLength
if err := actions.UpdateArtifactByID(ctx, artifact.ID, artifact); err != nil {
log.Error("Error UpdateArtifactByID: %v", err)
ctx.Error(http.StatusInternalServerError, "Error UpdateArtifactByID")
return
}
ctx.JSON(http.StatusCreated, "appended")
case "blocklist":
ctx.JSON(http.StatusCreated, "created")
}
}
type BlockList struct {
Latest []string `xml:"Latest"`
}
type Latest struct {
Value string `xml:",chardata"`
}
func (r *artifactV4Routes) readBlockList(runID, artifactID int64) (*BlockList, error) {
blockListName := fmt.Sprintf("tmpv4%d/%d-%d-blocklist", runID, runID, artifactID)
s, err := r.fs.Open(blockListName)
if err != nil {
return nil, err
}
xdec := xml.NewDecoder(s)
blockList := &BlockList{}
err = xdec.Decode(blockList)
delerr := r.fs.Delete(blockListName)
if delerr != nil {
log.Warn("Failed to delete blockList %s: %v", blockListName, delerr)
}
return blockList, err
}
func (r *artifactV4Routes) finalizeArtifact(ctx *ArtifactContext) {
var req FinalizeArtifactRequest
@@ -343,18 +395,34 @@ func (r *artifactV4Routes) finalizeArtifact(ctx *ArtifactContext) {
ctx.Error(http.StatusNotFound, "Error artifact not found")
return
}
chunkMap, err := listChunksByRunID(r.fs, runID)
var chunks []*chunkFileItem
blockList, err := r.readBlockList(runID, artifact.ID)
if err != nil {
log.Error("Error merge chunks: %v", err)
ctx.Error(http.StatusInternalServerError, "Error merge chunks")
return
}
chunks, ok := chunkMap[artifact.ID]
if !ok {
log.Error("Error merge chunks")
ctx.Error(http.StatusInternalServerError, "Error merge chunks")
return
log.Warn("Failed to read BlockList, fallback to old behavior: %v", err)
chunkMap, err := listChunksByRunID(r.fs, runID)
if err != nil {
log.Error("Error merge chunks: %v", err)
ctx.Error(http.StatusInternalServerError, "Error merge chunks")
return
}
chunks, ok = chunkMap[artifact.ID]
if !ok {
log.Error("Error merge chunks")
ctx.Error(http.StatusInternalServerError, "Error merge chunks")
return
}
} else {
chunks, err = listChunksByRunIDV4(r.fs, runID, artifact.ID, blockList)
if err != nil {
log.Error("Error merge chunks: %v", err)
ctx.Error(http.StatusInternalServerError, "Error merge chunks")
return
}
artifact.FileSize = chunks[len(chunks)-1].End + 1
artifact.FileCompressedSize = chunks[len(chunks)-1].End + 1
}
checksum := ""
if req.Hash != nil {
checksum = req.Hash.Value
@@ -449,13 +517,13 @@ func (r *artifactV4Routes) getSignedArtifactURL(ctx *ArtifactContext) {
respData := GetSignedArtifactURLResponse{}
if setting.Actions.ArtifactStorage.MinioConfig.ServeDirect {
u, err := storage.ActionsArtifacts.URL(artifact.StoragePath, artifact.ArtifactPath)
u, err := storage.ActionsArtifacts.URL(artifact.StoragePath, artifact.ArtifactPath, nil)
if u != nil && err == nil {
respData.SignedUrl = u.String()
}
}
if respData.SignedUrl == "" {
respData.SignedUrl = r.buildArtifactURL(ctx, "DownloadArtifact", artifactName, ctx.ActionTask.ID)
respData.SignedUrl = r.buildArtifactURL(ctx, "DownloadArtifact", artifactName, ctx.ActionTask.ID, artifact.ID)
}
r.sendProtbufBody(ctx, &respData)
}

View File

@@ -314,6 +314,7 @@ func CommonRoutes() *web.Route {
r.Get("/PACKAGES", cran.EnumerateSourcePackages)
r.Get("/PACKAGES{format}", cran.EnumerateSourcePackages)
r.Get("/{filename}", cran.DownloadSourcePackageFile)
r.Get("/Archive/{packagename}/{filename}", cran.DownloadSourcePackageFile)
})
r.Put("", reqPackageAccess(perm.AccessModeWrite), cran.UploadSourcePackageFile)
})
@@ -608,40 +609,46 @@ func CommonRoutes() *web.Route {
}, reqPackageAccess(perm.AccessModeWrite))
}, reqPackageAccess(perm.AccessModeRead))
r.Group("/swift", func() {
r.Group("/{scope}/{name}", func() {
r.Group("", func() {
r.Get("", swift.EnumeratePackageVersions)
r.Get(".json", swift.EnumeratePackageVersions)
}, swift.CheckAcceptMediaType(swift.AcceptJSON))
r.Group("/{version}", func() {
r.Get("/Package.swift", swift.CheckAcceptMediaType(swift.AcceptSwift), swift.DownloadManifest)
r.Put("", reqPackageAccess(perm.AccessModeWrite), swift.CheckAcceptMediaType(swift.AcceptJSON), swift.UploadPackageFile)
r.Get("", func(ctx *context.Context) {
// Can't use normal routes here: https://github.com/go-chi/chi/issues/781
r.Group("", func() { // Needs to be unauthenticated.
r.Post("", swift.CheckAuthenticate)
r.Post("/login", swift.CheckAuthenticate)
})
r.Group("", func() {
r.Group("/{scope}/{name}", func() {
r.Group("", func() {
r.Get("", swift.EnumeratePackageVersions)
r.Get(".json", swift.EnumeratePackageVersions)
}, swift.CheckAcceptMediaType(swift.AcceptJSON))
r.Group("/{version}", func() {
r.Get("/Package.swift", swift.CheckAcceptMediaType(swift.AcceptSwift), swift.DownloadManifest)
r.Put("", reqPackageAccess(perm.AccessModeWrite), swift.CheckAcceptMediaType(swift.AcceptJSON), swift.UploadPackageFile)
r.Get("", func(ctx *context.Context) {
// Can't use normal routes here: https://github.com/go-chi/chi/issues/781
version := ctx.Params("version")
if strings.HasSuffix(version, ".zip") {
swift.CheckAcceptMediaType(swift.AcceptZip)(ctx)
if ctx.Written() {
return
version := ctx.Params("version")
if strings.HasSuffix(version, ".zip") {
swift.CheckAcceptMediaType(swift.AcceptZip)(ctx)
if ctx.Written() {
return
}
ctx.SetParams("version", version[:len(version)-4])
swift.DownloadPackageFile(ctx)
} else {
swift.CheckAcceptMediaType(swift.AcceptJSON)(ctx)
if ctx.Written() {
return
}
if strings.HasSuffix(version, ".json") {
ctx.SetParams("version", version[:len(version)-5])
}
swift.PackageVersionMetadata(ctx)
}
ctx.SetParams("version", version[:len(version)-4])
swift.DownloadPackageFile(ctx)
} else {
swift.CheckAcceptMediaType(swift.AcceptJSON)(ctx)
if ctx.Written() {
return
}
if strings.HasSuffix(version, ".json") {
ctx.SetParams("version", version[:len(version)-5])
}
swift.PackageVersionMetadata(ctx)
}
})
})
})
})
r.Get("/identifiers", swift.CheckAcceptMediaType(swift.AcceptJSON), swift.LookupPackageIdentifiers)
}, reqPackageAccess(perm.AccessModeRead))
r.Get("/identifiers", swift.CheckAcceptMediaType(swift.AcceptJSON), swift.LookupPackageIdentifiers)
}, reqPackageAccess(perm.AccessModeRead))
})
r.Group("/vagrant", func() {
r.Group("/authenticate", func() {
r.Get("", vagrant.CheckAuthenticate)

View File

@@ -715,7 +715,9 @@ func DeleteManifest(ctx *context.Context) {
}
func serveBlob(ctx *context.Context, pfd *packages_model.PackageFileDescriptor) {
s, u, _, err := packages_service.GetPackageBlobStream(ctx, pfd.File, pfd.Blob)
serveDirectReqParams := make(url.Values)
serveDirectReqParams.Set("response-content-type", pfd.Properties.GetByName(container_module.PropertyMediaType))
s, u, _, err := packages_service.GetPackageBlobStream(ctx, pfd.File, pfd.Blob, serveDirectReqParams)
if err != nil {
apiError(ctx, http.StatusInternalServerError, err)
return

View File

@@ -215,7 +215,7 @@ func servePackageFile(ctx *context.Context, params parameters, serveContent bool
return
}
s, u, _, err := packages_service.GetPackageBlobStream(ctx, pf, pb)
s, u, _, err := packages_service.GetPackageBlobStream(ctx, pf, pb, nil)
if err != nil {
apiError(ctx, http.StatusInternalServerError, err)
return

View File

@@ -27,7 +27,7 @@ import (
"github.com/hashicorp/go-version"
)
// https://github.com/apple/swift-package-manager/blob/main/Documentation/Registry.md#35-api-versioning
// https://github.com/swiftlang/swift-package-manager/blob/main/Documentation/PackageRegistry/Registry.md#35-api-versioning
const (
AcceptJSON = "application/vnd.swift.registry.v1+json"
AcceptSwift = "application/vnd.swift.registry.v1+swift"
@@ -35,9 +35,9 @@ const (
)
var (
// https://github.com/apple/swift-package-manager/blob/main/Documentation/Registry.md#361-package-scope
// https://github.com/swiftlang/swift-package-manager/blob/main/Documentation/PackageRegistry/Registry.md#361-package-scope
scopePattern = regexp.MustCompile(`\A[a-zA-Z0-9][a-zA-Z0-9-]{0,38}\z`)
// https://github.com/apple/swift-package-manager/blob/main/Documentation/Registry.md#362-package-name
// https://github.com/swiftlang/swift-package-manager/blob/main/Documentation/PackageRegistry/Registry.md#362-package-name
namePattern = regexp.MustCompile(`\A[a-zA-Z0-9][a-zA-Z0-9-_]{0,99}\z`)
)
@@ -49,7 +49,7 @@ type headers struct {
Link string
}
// https://github.com/apple/swift-package-manager/blob/main/Documentation/Registry.md#35-api-versioning
// https://github.com/swiftlang/swift-package-manager/blob/main/Documentation/PackageRegistry/Registry.md#35-api-versioning
func setResponseHeaders(resp http.ResponseWriter, h *headers) {
if h.ContentType != "" {
resp.Header().Set("Content-Type", h.ContentType)
@@ -69,7 +69,7 @@ func setResponseHeaders(resp http.ResponseWriter, h *headers) {
}
}
// https://github.com/apple/swift-package-manager/blob/main/Documentation/Registry.md#33-error-handling
// https://github.com/swiftlang/swift-package-manager/blob/main/Documentation/PackageRegistry/Registry.md#33-error-handling
func apiError(ctx *context.Context, status int, obj any) {
// https://www.rfc-editor.org/rfc/rfc7807
type Problem struct {
@@ -91,7 +91,7 @@ func apiError(ctx *context.Context, status int, obj any) {
})
}
// https://github.com/apple/swift-package-manager/blob/main/Documentation/Registry.md#35-api-versioning
// https://github.com/swiftlang/swift-package-manager/blob/main/Documentation/PackageRegistry/Registry.md#35-api-versioning
func CheckAcceptMediaType(requiredAcceptHeader string) func(ctx *context.Context) {
return func(ctx *context.Context) {
accept := ctx.Req.Header.Get("Accept")
@@ -101,6 +101,16 @@ func CheckAcceptMediaType(requiredAcceptHeader string) func(ctx *context.Context
}
}
// https://github.com/swiftlang/swift-package-manager/blob/main/Documentation/PackageRegistry/PackageRegistryUsage.md#registry-authentication
func CheckAuthenticate(ctx *context.Context) {
if ctx.Doer == nil {
apiError(ctx, http.StatusUnauthorized, nil)
return
}
ctx.Status(http.StatusOK)
}
func buildPackageID(scope, name string) string {
return scope + "." + name
}
@@ -113,7 +123,7 @@ type EnumeratePackageVersionsResponse struct {
Releases map[string]Release `json:"releases"`
}
// https://github.com/apple/swift-package-manager/blob/main/Documentation/Registry.md#41-list-package-releases
// https://github.com/swiftlang/swift-package-manager/blob/main/Documentation/PackageRegistry/Registry.md#41-list-package-releases
func EnumeratePackageVersions(ctx *context.Context) {
packageScope := ctx.Params("scope")
packageName := ctx.Params("name")
@@ -170,7 +180,7 @@ type PackageVersionMetadataResponse struct {
Metadata *swift_module.SoftwareSourceCode `json:"metadata"`
}
// https://github.com/apple/swift-package-manager/blob/main/Documentation/Registry.md#endpoint-2
// https://github.com/swiftlang/swift-package-manager/blob/main/Documentation/PackageRegistry/Registry.md#endpoint-2
func PackageVersionMetadata(ctx *context.Context) {
id := buildPackageID(ctx.Params("scope"), ctx.Params("name"))
@@ -228,7 +238,7 @@ func PackageVersionMetadata(ctx *context.Context) {
})
}
// https://github.com/apple/swift-package-manager/blob/main/Documentation/Registry.md#43-fetch-manifest-for-a-package-release
// https://github.com/swiftlang/swift-package-manager/blob/main/Documentation/PackageRegistry/Registry.md#43-fetch-manifest-for-a-package-release
func DownloadManifest(ctx *context.Context) {
packageScope := ctx.Params("scope")
packageName := ctx.Params("name")
@@ -280,7 +290,7 @@ func DownloadManifest(ctx *context.Context) {
})
}
// https://github.com/apple/swift-package-manager/blob/main/Documentation/Registry.md#endpoint-6
// https://github.com/swiftlang/swift-package-manager/blob/main/Documentation/PackageRegistry/Registry.md#endpoint-6
func UploadPackageFile(ctx *context.Context) {
packageScope := ctx.Params("scope")
packageName := ctx.Params("name")
@@ -379,7 +389,7 @@ func UploadPackageFile(ctx *context.Context) {
ctx.Status(http.StatusCreated)
}
// https://github.com/apple/swift-package-manager/blob/main/Documentation/Registry.md#endpoint-4
// https://github.com/swiftlang/swift-package-manager/blob/main/Documentation/PackageRegistry/Registry.md#endpoint-4
func DownloadPackageFile(ctx *context.Context) {
pv, err := packages_model.GetVersionByNameAndVersion(ctx, ctx.Package.Owner.ID, packages_model.TypeSwift, buildPackageID(ctx.Params("scope"), ctx.Params("name")), ctx.Params("version"))
if err != nil {
@@ -420,7 +430,7 @@ type LookupPackageIdentifiersResponse struct {
Identifiers []string `json:"identifiers"`
}
// https://github.com/apple/swift-package-manager/blob/main/Documentation/Registry.md#endpoint-5
// https://github.com/swiftlang/swift-package-manager/blob/main/Documentation/PackageRegistry/Registry.md#endpoint-5
func LookupPackageIdentifiers(ctx *context.Context) {
url := ctx.FormTrim("url")
if url == "" {

View File

@@ -356,12 +356,20 @@ func reqToken() func(ctx *context.APIContext) {
func reqExploreSignIn() func(ctx *context.APIContext) {
return func(ctx *context.APIContext) {
if setting.Service.Explore.RequireSigninView && !ctx.IsSigned {
if (setting.Service.RequireSignInView || setting.Service.Explore.RequireSigninView) && !ctx.IsSigned {
ctx.Error(http.StatusUnauthorized, "reqExploreSignIn", "you must be signed in to search for users")
}
}
}
func reqUsersExploreEnabled() func(ctx *context.APIContext) {
return func(ctx *context.APIContext) {
if setting.Service.Explore.DisableUsersPage {
ctx.NotFound()
}
}
}
func reqBasicOrRevProxyAuth() func(ctx *context.APIContext) {
return func(ctx *context.APIContext) {
if ctx.IsSigned && setting.Service.EnableReverseProxyAuthAPI && ctx.Data["AuthedMethod"].(string) == auth.ReverseProxyMethodName {
@@ -955,7 +963,7 @@ func Routes() *web.Route {
// Users (requires user scope)
m.Group("/users", func() {
m.Get("/search", reqExploreSignIn(), user.Search)
m.Get("/search", reqExploreSignIn(), reqUsersExploreEnabled(), user.Search)
m.Group("/{username}", func() {
m.Get("", reqExploreSignIn(), user.GetInfo)

View File

@@ -155,11 +155,6 @@ func DeleteBranch(ctx *context.APIContext) {
}
}
if ctx.Repo.Repository.IsMirror {
ctx.Error(http.StatusForbidden, "IsMirrored", fmt.Errorf("can not delete branch of an mirror repository"))
return
}
if err := repo_service.DeleteBranch(ctx, ctx.Doer, ctx.Repo.Repository, ctx.Repo.GitRepo, branchName); err != nil {
switch {
case git.IsErrBranchNotExist(err):

View File

@@ -18,6 +18,8 @@ import (
"code.gitea.io/gitea/routers/api/v1/utils"
"code.gitea.io/gitea/services/context"
"code.gitea.io/gitea/services/convert"
issue_service "code.gitea.io/gitea/services/issue"
pull_service "code.gitea.io/gitea/services/pull"
repo_service "code.gitea.io/gitea/services/repository"
)
@@ -323,7 +325,13 @@ func GetReviewers(ctx *context.APIContext) {
// "404":
// "$ref": "#/responses/notFound"
reviewers, err := repo_model.GetReviewers(ctx, ctx.Repo.Repository, ctx.Doer.ID, 0)
canChooseReviewer := issue_service.CanDoerChangeReviewRequests(ctx, ctx.Doer, ctx.Repo.Repository, 0)
if !canChooseReviewer {
ctx.Error(http.StatusForbidden, "GetReviewers", errors.New("doer has no permission to get reviewers"))
return
}
reviewers, err := pull_service.GetReviewers(ctx, ctx.Repo.Repository, ctx.Doer.ID, 0)
if err != nil {
ctx.Error(http.StatusInternalServerError, "ListCollaborators", err)
return

View File

@@ -203,7 +203,7 @@ func GetRawFileOrLFS(ctx *context.APIContext) {
if setting.LFS.Storage.MinioConfig.ServeDirect {
// If we have a signed url (S3, object storage), redirect to this directly.
u, err := storage.LFS.URL(pointer.RelativePath(), blob.Name())
u, err := storage.LFS.URL(pointer.RelativePath(), blob.Name(), nil)
if u != nil && err == nil {
ctx.Redirect(u.String())
return
@@ -328,7 +328,7 @@ func download(ctx *context.APIContext, archiveName string, archiver *repo_model.
rPath := archiver.RelativePath()
if setting.RepoArchive.Storage.MinioConfig.ServeDirect {
// If we have a signed url (S3, object storage), redirect to this directly.
u, err := storage.RepoArchives.URL(rPath, downloadName)
u, err := storage.RepoArchives.URL(rPath, downloadName, nil)
if u != nil && err == nil {
ctx.Redirect(u.String())
return
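Every object-storage call site in this diff gains a third argument to `URL(...)`, passed as `nil`. The sketch below illustrates the serve-direct pattern these hunks follow; the interface shape and the meaning of the extra parameter (optional request values for the pre-signed URL, e.g. a content-disposition override) are assumptions, not the actual Gitea storage API.

```go
// Minimal sketch of the serve-direct redirect pattern, assuming a storage backend
// whose URL method now takes an extra reqParams argument (nil keeps old behaviour).
package storageexample

import (
	"fmt"
	"net/http"
	"net/url"
)

// signedURLStore is a hypothetical stand-in for the object-storage interface.
type signedURLStore interface {
	// URL returns a pre-signed URL for path; reqParams (may be nil) is assumed to
	// carry optional signed-URL request parameters.
	URL(path, name string, reqParams url.Values) (*url.URL, error)
}

func serveOrRedirect(w http.ResponseWriter, r *http.Request, store signedURLStore, path, name string, serveDirect bool) {
	if serveDirect {
		if u, err := store.URL(path, name, nil); u != nil && err == nil {
			// Redirect to the signed URL (S3/MinIO) instead of proxying the bytes.
			http.Redirect(w, r, u.String(), http.StatusTemporaryRedirect)
			return
		}
	}
	// Fall back to serving the file through the application.
	fmt.Fprintln(w, "serving", name, "from the application")
}
```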

View File

@@ -55,11 +55,20 @@ func ListForks(ctx *context.APIContext) {
// "404":
// "$ref": "#/responses/notFound"
forks, err := repo_model.GetForks(ctx, ctx.Repo.Repository, utils.GetListOptions(ctx))
forks, total, err := repo_service.FindForks(ctx, ctx.Repo.Repository, ctx.Doer, utils.GetListOptions(ctx))
if err != nil {
ctx.Error(http.StatusInternalServerError, "GetForks", err)
ctx.Error(http.StatusInternalServerError, "FindForks", err)
return
}
if err := repo_model.RepositoryList(forks).LoadOwners(ctx); err != nil {
ctx.Error(http.StatusInternalServerError, "LoadOwners", err)
return
}
if err := repo_model.RepositoryList(forks).LoadUnits(ctx); err != nil {
ctx.Error(http.StatusInternalServerError, "LoadUnits", err)
return
}
apiForks := make([]*api.Repository, len(forks))
for i, fork := range forks {
permission, err := access_model.GetUserRepoPermission(ctx, fork, ctx.Doer)
@@ -70,7 +79,7 @@ func ListForks(ctx *context.APIContext) {
apiForks[i] = convert.ToRepo(ctx, fork, permission)
}
ctx.SetTotalCountHeader(int64(ctx.Repo.Repository.NumForks))
ctx.SetTotalCountHeader(total)
ctx.JSON(http.StatusOK, apiForks)
}
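With this change the forks endpoint reports the number of forks actually visible to the caller (the `total` returned by `FindForks`) instead of the repository's raw `NumForks` counter. A client-side sketch, assuming the standard `/api/v1/repos/{owner}/{repo}/forks` route and that `SetTotalCountHeader` writes the `X-Total-Count` response header; host, owner and repo names are placeholders.

```go
// Sketch of reading the visible-fork total from the response header.
package forksexample

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func countVisibleForks(baseURL, owner, repo, token string) (int, error) {
	url := fmt.Sprintf("%s/api/v1/repos/%s/%s/forks?page=1&limit=10", baseURL, owner, repo)
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return 0, err
	}
	if token != "" {
		req.Header.Set("Authorization", "token "+token)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	var forks []map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&forks); err != nil {
		return 0, err
	}
	// The header holds the total visible to the caller, which may be smaller
	// than the repository's public NumForks counter.
	fmt.Println("total visible forks:", resp.Header.Get("X-Total-Count"))
	return len(forks), nil
}
```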

View File

@@ -41,80 +41,93 @@ func SearchIssues(ctx *context.APIContext) {
// parameters:
// - name: state
// in: query
// description: whether issue is open or closed
// description: State of the issue
// type: string
// enum: [open, closed, all]
// default: open
// - name: labels
// in: query
// description: comma separated list of labels. Fetch only issues that have any of this labels. Non existent labels are discarded
// description: Comma-separated list of label names. Fetch only issues that have any of these labels. Non existent labels are discarded.
// type: string
// - name: milestones
// in: query
// description: comma separated list of milestone names. Fetch only issues that have any of this milestones. Non existent are discarded
// description: Comma-separated list of milestone names. Fetch only issues that have any of these milestones. Non existent milestones are discarded.
// type: string
// - name: q
// in: query
// description: search string
// description: Search string
// type: string
// - name: priority_repo_id
// in: query
// description: repository to prioritize in the results
// description: Repository ID to prioritize in the results
// type: integer
// format: int64
// - name: type
// in: query
// description: filter by type (issues / pulls) if set
// description: Filter by issue type
// type: string
// enum: [issues, pulls]
// - name: since
// in: query
// description: Only show notifications updated after the given time. This is a timestamp in RFC 3339 format
// description: Only show issues updated after the given time (RFC 3339 format)
// type: string
// format: date-time
// required: false
// - name: before
// in: query
// description: Only show notifications updated before the given time. This is a timestamp in RFC 3339 format
// description: Only show issues updated before the given time (RFC 3339 format)
// type: string
// format: date-time
// required: false
// - name: assigned
// in: query
// description: filter (issues / pulls) assigned to you, default is false
// description: Filter issues or pulls assigned to the authenticated user
// type: boolean
// default: false
// - name: created
// in: query
// description: filter (issues / pulls) created by you, default is false
// description: Filter issues or pulls created by the authenticated user
// type: boolean
// default: false
// - name: mentioned
// in: query
// description: filter (issues / pulls) mentioning you, default is false
// description: Filter issues or pulls mentioning the authenticated user
// type: boolean
// default: false
// - name: review_requested
// in: query
// description: filter pulls requesting your review, default is false
// description: Filter pull requests where the authenticated user's review was requested
// type: boolean
// default: false
// - name: reviewed
// in: query
// description: filter pulls reviewed by you, default is false
// description: Filter pull requests reviewed by the authenticated user
// type: boolean
// default: false
// - name: owner
// in: query
// description: filter by owner
// description: Filter by repository owner
// type: string
// - name: team
// in: query
// description: filter by team (requires organization owner parameter to be provided)
// description: Filter by team (requires organization owner parameter)
// type: string
// - name: page
// in: query
// description: page number of results to return (1-based)
// description: Page number of results to return (1-based)
// type: integer
// minimum: 1
// default: 1
// - name: limit
// in: query
// description: page size of results
// description: Number of items per page
// type: integer
// minimum: 0
// responses:
// "200":
// "$ref": "#/responses/IssueList"
// "400":
// "$ref": "#/responses/error"
// "422":
// "$ref": "#/responses/validationError"
before, since, err := context.GetQueryBeforeSince(ctx.Base)
if err != nil {

View File

@@ -319,6 +319,11 @@ func prepareForReplaceOrAdd(ctx *context.APIContext, form api.IssueLabelsOption)
return nil, nil, err
}
if !ctx.Repo.CanWriteIssuesOrPulls(issue.IsPull) {
ctx.Error(http.StatusForbidden, "CanWriteIssuesOrPulls", "write permission is required")
return nil, nil, fmt.Errorf("permission denied")
}
var (
labelIDs []int64
labelNames []string
@@ -350,10 +355,5 @@ func prepareForReplaceOrAdd(ctx *context.APIContext, form api.IssueLabelsOption)
return nil, nil, err
}
if !ctx.Repo.CanWriteIssuesOrPulls(issue.IsPull) {
ctx.Status(http.StatusForbidden)
return nil, nil, nil
}
return issue, labels, err
}

View File

@@ -1000,49 +1000,54 @@ func MergePullRequest(ctx *context.APIContext) {
}
log.Trace("Pull request merged: %d", pr.ID)
if form.DeleteBranchAfterMerge {
// Don't cleanup when there are other PR's that use this branch as head branch.
exist, err := issues_model.HasUnmergedPullRequestsByHeadInfo(ctx, pr.HeadRepoID, pr.HeadBranch)
if err != nil {
ctx.ServerError("HasUnmergedPullRequestsByHeadInfo", err)
return
}
if exist {
ctx.Status(http.StatusOK)
return
}
var headRepo *git.Repository
if ctx.Repo != nil && ctx.Repo.Repository != nil && ctx.Repo.Repository.ID == pr.HeadRepoID && ctx.Repo.GitRepo != nil {
headRepo = ctx.Repo.GitRepo
} else {
headRepo, err = gitrepo.OpenRepository(ctx, pr.HeadRepo)
// for agit flow, we should not delete the agit reference after merge
if form.DeleteBranchAfterMerge && pr.Flow == issues_model.PullRequestFlowGithub {
// check permission up front, even though repo_service.DeleteBranch checks it again, so that we don't
// run RetargetChildrenOnMerge for a doer who cannot delete the branch
if err := repo_service.CanDeleteBranch(ctx, pr.HeadRepo, pr.HeadBranch, ctx.Doer); err == nil {
// Don't cleanup when there are other PR's that use this branch as head branch.
exist, err := issues_model.HasUnmergedPullRequestsByHeadInfo(ctx, pr.HeadRepoID, pr.HeadBranch)
if err != nil {
ctx.ServerError(fmt.Sprintf("OpenRepository[%s]", pr.HeadRepo.FullName()), err)
ctx.ServerError("HasUnmergedPullRequestsByHeadInfo", err)
return
}
defer headRepo.Close()
}
if err := pull_service.RetargetChildrenOnMerge(ctx, ctx.Doer, pr); err != nil {
ctx.Error(http.StatusInternalServerError, "RetargetChildrenOnMerge", err)
return
}
if err := repo_service.DeleteBranch(ctx, ctx.Doer, pr.HeadRepo, headRepo, pr.HeadBranch); err != nil {
switch {
case git.IsErrBranchNotExist(err):
ctx.NotFound(err)
case errors.Is(err, repo_service.ErrBranchIsDefault):
ctx.Error(http.StatusForbidden, "DefaultBranch", fmt.Errorf("can not delete default branch"))
case errors.Is(err, git_model.ErrBranchIsProtected):
ctx.Error(http.StatusForbidden, "IsProtectedBranch", fmt.Errorf("branch protected"))
default:
ctx.Error(http.StatusInternalServerError, "DeleteBranch", err)
if exist {
ctx.Status(http.StatusOK)
return
}
var headRepo *git.Repository
if ctx.Repo != nil && ctx.Repo.Repository != nil && ctx.Repo.Repository.ID == pr.HeadRepoID && ctx.Repo.GitRepo != nil {
headRepo = ctx.Repo.GitRepo
} else {
headRepo, err = gitrepo.OpenRepository(ctx, pr.HeadRepo)
if err != nil {
ctx.ServerError(fmt.Sprintf("OpenRepository[%s]", pr.HeadRepo.FullName()), err)
return
}
defer headRepo.Close()
}
if err := pull_service.RetargetChildrenOnMerge(ctx, ctx.Doer, pr); err != nil {
ctx.Error(http.StatusInternalServerError, "RetargetChildrenOnMerge", err)
return
}
if err := repo_service.DeleteBranch(ctx, ctx.Doer, pr.HeadRepo, headRepo, pr.HeadBranch); err != nil {
switch {
case git.IsErrBranchNotExist(err):
ctx.NotFound(err)
case errors.Is(err, repo_service.ErrBranchIsDefault):
ctx.Error(http.StatusForbidden, "DefaultBranch", fmt.Errorf("can not delete default branch"))
case errors.Is(err, git_model.ErrBranchIsProtected):
ctx.Error(http.StatusForbidden, "IsProtectedBranch", fmt.Errorf("branch protected"))
default:
ctx.Error(http.StatusInternalServerError, "DeleteBranch", err)
}
return
}
if err := issues_model.AddDeletePRBranchComment(ctx, ctx.Doer, pr.BaseRepo, pr.Issue.ID, pr.HeadBranch); err != nil {
// Do not fail here as branch has already been deleted
log.Error("DeleteBranch: %v", err)
}
return
}
if err := issues_model.AddDeletePRBranchComment(ctx, ctx.Doer, pr.BaseRepo, pr.Issue.ID, pr.HeadBranch); err != nil {
// Do not fail here as branch has already been deleted
log.Error("DeleteBranch: %v", err)
}
}
@@ -1103,9 +1108,20 @@ func parseCompareInfo(ctx *context.APIContext, form api.CreatePullRequestOption)
// Check if current user has fork of repository or in the same repository.
headRepo := repo_model.GetForkedRepo(ctx, headUser.ID, baseRepo.ID)
if headRepo == nil && !isSameRepo {
log.Trace("parseCompareInfo[%d]: does not have fork or in same repository", baseRepo.ID)
ctx.NotFound("GetForkedRepo")
return nil, nil, nil, nil, "", ""
err := baseRepo.GetBaseRepo(ctx)
if err != nil {
ctx.Error(http.StatusInternalServerError, "GetBaseRepo", err)
return nil, nil, nil, nil, "", ""
}
// Check if baseRepo's base repository is the same as headUser's repository.
if baseRepo.BaseRepo == nil || baseRepo.BaseRepo.OwnerID != headUser.ID {
log.Trace("parseCompareInfo[%d]: does not have fork or in same repository", baseRepo.ID)
ctx.NotFound("GetBaseRepo")
return nil, nil, nil, nil, "", ""
}
// Assign headRepo so it can be used below.
headRepo = baseRepo.BaseRepo
}
var headGitRepo *git.Repository

View File

@@ -5,6 +5,7 @@
package private
import (
"crypto/subtle"
"net/http"
"strings"
@@ -18,22 +19,23 @@ import (
chi_middleware "github.com/go-chi/chi/v5/middleware"
)
// CheckInternalToken check internal token is set
func CheckInternalToken(next http.Handler) http.Handler {
func authInternal(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
tokens := req.Header.Get("Authorization")
fields := strings.SplitN(tokens, " ", 2)
if setting.InternalToken == "" {
log.Warn(`The INTERNAL_TOKEN setting is missing from the configuration file: %q, internal API can't work.`, setting.CustomConf)
http.Error(w, http.StatusText(http.StatusForbidden), http.StatusForbidden)
return
}
if len(fields) != 2 || fields[0] != "Bearer" || fields[1] != setting.InternalToken {
tokens := req.Header.Get("X-Gitea-Internal-Auth") // TODO: use something like JWT or HMAC to avoid passing the token in the clear
after, found := strings.CutPrefix(tokens, "Bearer ")
authSucceeded := found && subtle.ConstantTimeCompare([]byte(after), []byte(setting.InternalToken)) == 1
if !authSucceeded {
log.Debug("Forbidden attempt to access internal url: Authorization header: %s", tokens)
http.Error(w, http.StatusText(http.StatusForbidden), http.StatusForbidden)
} else {
next.ServeHTTP(w, req)
return
}
next.ServeHTTP(w, req)
})
}
@@ -51,7 +53,7 @@ func bind[T any](_ T) any {
func Routes() *web.Route {
r := web.NewRoute()
r.Use(context.PrivateContexter())
r.Use(CheckInternalToken)
r.Use(authInternal)
// Logging the real IP address of requests coming over SSH is really helpful for diagnosing issues.
// Since the internal API is called only by Gitea sub-commands and is under our control (checked by InternalToken), we can trust the headers.
r.Use(chi_middleware.RealIP)
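The internal API now reads its token from a dedicated `X-Gitea-Internal-Auth` header and compares it in constant time via `subtle.ConstantTimeCompare`. A minimal sketch of how a caller would attach the header; `newInternalRequest` is an illustrative helper name, only the header format comes from the diff above.

```go
// Sketch of a caller attaching the internal token with the new header name.
package internalexample

import (
	"context"
	"net/http"
)

func newInternalRequest(ctx context.Context, url, method, internalToken string) (*http.Request, error) {
	req, err := http.NewRequestWithContext(ctx, method, url, nil)
	if err != nil {
		return nil, err
	}
	// The server strips the "Bearer " prefix and compares the remainder against
	// setting.InternalToken with subtle.ConstantTimeCompare, so response timing
	// does not leak how many leading bytes of a guessed token are correct.
	req.Header.Set("X-Gitea-Internal-Auth", "Bearer "+internalToken)
	return req, nil
}
```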

View File

@@ -953,6 +953,8 @@ func SignInOAuthCallback(ctx *context.Context) {
}
if err, ok := err.(*go_oauth2.RetrieveError); ok {
ctx.Flash.Error("OAuth2 RetrieveError: "+err.Error(), true)
ctx.Redirect(setting.AppSubURL + "/user/login")
return
}
ctx.ServerError("UserSignIn", err)
return

View File

@@ -39,7 +39,7 @@ func storageHandler(storageSetting *setting.Storage, prefix string, objStore sto
rPath := strings.TrimPrefix(req.URL.Path, "/"+prefix+"/")
rPath = util.PathJoinRelX(rPath)
u, err := objStore.URL(rPath, path.Base(rPath))
u, err := objStore.URL(rPath, path.Base(rPath), nil)
if err != nil {
if os.IsNotExist(err) || errors.Is(err, os.ErrNotExist) {
log.Warn("Unable to find %s %s", prefix, rPath)

View File

@@ -9,6 +9,7 @@ import (
"code.gitea.io/gitea/modules/container"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/util"
"code.gitea.io/gitea/services/context"
)
@@ -33,7 +34,7 @@ func Organizations(ctx *context.Context) {
)
sortOrder := ctx.FormString("sort")
if sortOrder == "" {
sortOrder = "newest"
sortOrder = util.Iif(supportedSortOrders.Contains(setting.UI.ExploreDefaultSort), setting.UI.ExploreDefaultSort, "newest")
ctx.SetFormString("sort", sortOrder)
}
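`util.Iif` is used here to fall back to "newest" whenever the configured `setting.UI.ExploreDefaultSort` is not one of the supported sort orders. A minimal sketch of such a generic conditional helper, assuming roughly this shape for the real `modules/util` helper and using a plain map as a stand-in for the supported-order set.

```go
// Minimal sketch of the conditional helper and the default-sort fallback.
package sortexample

// Iif returns trueVal when cond is true, otherwise falseVal (a generic "ternary").
func Iif[T any](cond bool, trueVal, falseVal T) T {
	if cond {
		return trueVal
	}
	return falseVal
}

func defaultSortOrder(supported map[string]struct{}, configured string) string {
	_, ok := supported[configured]
	// Fall back to "newest" when the configured explore default is not supported.
	return Iif(ok, configured, "newest")
}
```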

View File

@@ -16,6 +16,7 @@ import (
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/sitemap"
"code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/util"
"code.gitea.io/gitea/services/context"
)
@@ -147,7 +148,7 @@ func Users(ctx *context.Context) {
)
sortOrder := ctx.FormString("sort")
if sortOrder == "" {
sortOrder = "newest"
sortOrder = util.Iif(supportedSortOrders.Contains(setting.UI.ExploreDefaultSort), setting.UI.ExploreDefaultSort, "newest")
ctx.SetFormString("sort", sortOrder)
}

View File

@@ -626,7 +626,7 @@ func ArtifactsDownloadView(ctx *context_module.Context) {
if len(artifacts) == 1 && artifacts[0].ArtifactName+".zip" == artifacts[0].ArtifactPath && artifacts[0].ContentEncoding == "application/zip" {
art := artifacts[0]
if setting.Actions.ArtifactStorage.MinioConfig.ServeDirect {
u, err := storage.ActionsArtifacts.URL(art.StoragePath, art.ArtifactPath)
u, err := storage.ActionsArtifacts.URL(art.StoragePath, art.ArtifactPath, nil)
if u != nil && err == nil {
ctx.Redirect(u.String())
return

View File

@@ -129,7 +129,7 @@ func ServeAttachment(ctx *context.Context, uuid string) {
if setting.Attachment.Storage.MinioConfig.ServeDirect {
// If we have a signed url (S3, object storage), redirect to this directly.
u, err := storage.Attachments.URL(attach.RelativePath(), attach.Name)
u, err := storage.Attachments.URL(attach.RelativePath(), attach.Name, nil)
if u != nil && err == nil {
ctx.Redirect(u.String())

View File

@@ -55,7 +55,7 @@ func ServeBlobOrLFS(ctx *context.Context, blob *git.Blob, lastModified *time.Tim
if setting.LFS.Storage.MinioConfig.ServeDirect {
// If we have a signed url (S3, object storage), redirect to this directly.
u, err := storage.LFS.URL(pointer.RelativePath(), blob.Name())
u, err := storage.LFS.URL(pointer.RelativePath(), blob.Name(), nil)
if u != nil && err == nil {
ctx.Redirect(u.String())
return nil

View File

@@ -56,7 +56,6 @@ import (
"code.gitea.io/gitea/services/forms"
issue_service "code.gitea.io/gitea/services/issue"
pull_service "code.gitea.io/gitea/services/pull"
repo_service "code.gitea.io/gitea/services/repository"
user_service "code.gitea.io/gitea/services/user"
)
@@ -693,13 +692,13 @@ func RetrieveRepoReviewers(ctx *context.Context, repo *repo_model.Repository, is
posterID = 0
}
reviewers, err = repo_model.GetReviewers(ctx, repo, ctx.Doer.ID, posterID)
reviewers, err = pull_service.GetReviewers(ctx, repo, ctx.Doer.ID, posterID)
if err != nil {
ctx.ServerError("GetReviewers", err)
return
}
teamReviewers, err = repo_service.GetReviewerTeams(ctx, repo)
teamReviewers, err = pull_service.GetReviewerTeams(ctx, repo)
if err != nil {
ctx.ServerError("GetReviewerTeams", err)
return
@@ -1536,7 +1535,7 @@ func ViewIssue(ctx *context.Context) {
if issue.IsPull {
canChooseReviewer := false
if ctx.Doer != nil && ctx.IsSigned {
canChooseReviewer = issue_service.CanDoerChangeReviewRequests(ctx, ctx.Doer, repo, issue)
canChooseReviewer = issue_service.CanDoerChangeReviewRequests(ctx, ctx.Doer, repo, issue.PosterID)
}
RetrieveRepoReviewers(ctx, repo, issue, canChooseReviewer)

View File

@@ -1160,32 +1160,34 @@ func MergePullRequest(ctx *context.Context) {
log.Trace("Pull request merged: %d", pr.ID)
if form.DeleteBranchAfterMerge {
// Don't cleanup when other pr use this branch as head branch
exist, err := issues_model.HasUnmergedPullRequestsByHeadInfo(ctx, pr.HeadRepoID, pr.HeadBranch)
if err != nil {
ctx.ServerError("HasUnmergedPullRequestsByHeadInfo", err)
return
}
if exist {
ctx.JSONRedirect(issue.Link())
return
}
var headRepo *git.Repository
if ctx.Repo != nil && ctx.Repo.Repository != nil && pr.HeadRepoID == ctx.Repo.Repository.ID && ctx.Repo.GitRepo != nil {
headRepo = ctx.Repo.GitRepo
} else {
headRepo, err = gitrepo.OpenRepository(ctx, pr.HeadRepo)
if err != nil {
ctx.ServerError(fmt.Sprintf("OpenRepository[%s]", pr.HeadRepo.FullName()), err)
return
}
defer headRepo.Close()
}
deleteBranch(ctx, pr, headRepo)
if !form.DeleteBranchAfterMerge {
ctx.JSONRedirect(issue.Link())
return
}
// Don't cleanup when other pr use this branch as head branch
exist, err := issues_model.HasUnmergedPullRequestsByHeadInfo(ctx, pr.HeadRepoID, pr.HeadBranch)
if err != nil {
ctx.ServerError("HasUnmergedPullRequestsByHeadInfo", err)
return
}
if exist {
ctx.JSONRedirect(issue.Link())
return
}
var headRepo *git.Repository
if ctx.Repo != nil && ctx.Repo.Repository != nil && pr.HeadRepoID == ctx.Repo.Repository.ID && ctx.Repo.GitRepo != nil {
headRepo = ctx.Repo.GitRepo
} else {
headRepo, err = gitrepo.OpenRepository(ctx, pr.HeadRepo)
if err != nil {
ctx.ServerError(fmt.Sprintf("OpenRepository[%s]", pr.HeadRepo.FullName()), err)
return
}
defer headRepo.Close()
}
deleteBranch(ctx, pr, headRepo)
ctx.JSONRedirect(issue.Link())
}
@@ -1367,8 +1369,8 @@ func CleanUpPullRequest(ctx *context.Context) {
pr := issue.PullRequest
// Don't cleanup unmerged and unclosed PRs
if !pr.HasMerged && !issue.IsClosed {
// Don't cleanup unmerged and unclosed PRs and agit PRs
if !pr.HasMerged && !issue.IsClosed && pr.Flow != issues_model.PullRequestFlowGithub {
ctx.NotFound("CleanUpPullRequest", nil)
return
}
@@ -1399,13 +1401,12 @@ func CleanUpPullRequest(ctx *context.Context) {
return
}
perm, err := access_model.GetUserRepoPermission(ctx, pr.HeadRepo, ctx.Doer)
if err != nil {
ctx.ServerError("GetUserRepoPermission", err)
return
}
if !perm.CanWrite(unit.TypeCode) {
ctx.NotFound("CleanUpPullRequest", nil)
if err := repo_service.CanDeleteBranch(ctx, pr.HeadRepo, pr.HeadBranch, ctx.Doer); err != nil {
if errors.Is(err, util.ErrPermissionDenied) {
ctx.NotFound("CanDeleteBranch", nil)
} else {
ctx.ServerError("CanDeleteBranch", err)
}
return
}

View File

@@ -494,7 +494,7 @@ func download(ctx *context.Context, archiveName string, archiver *repo_model.Rep
rPath := archiver.RelativePath()
if setting.RepoArchive.Storage.MinioConfig.ServeDirect {
// If we have a signed url (S3, object storage), redirect to this directly.
u, err := storage.RepoArchives.URL(rPath, downloadName)
u, err := storage.RepoArchives.URL(rPath, downloadName, nil)
if u != nil && err == nil {
ctx.Redirect(u.String())
return

View File

@@ -8,7 +8,6 @@ import (
"errors"
"fmt"
"net/http"
"strconv"
"strings"
"time"
@@ -298,8 +297,8 @@ func SettingsPost(ctx *context.Context) {
return
}
m, err := selectPushMirrorByForm(ctx, form, repo)
if err != nil {
m, _, _ := repo_model.GetPushMirrorByIDAndRepoID(ctx, form.PushMirrorID, repo.ID)
if m == nil {
ctx.NotFound("", nil)
return
}
@@ -325,15 +324,13 @@ func SettingsPost(ctx *context.Context) {
return
}
id, err := strconv.ParseInt(form.PushMirrorID, 10, 64)
if err != nil {
ctx.ServerError("UpdatePushMirrorIntervalPushMirrorID", err)
m, _, _ := repo_model.GetPushMirrorByIDAndRepoID(ctx, form.PushMirrorID, repo.ID)
if m == nil {
ctx.NotFound("", nil)
return
}
m := &repo_model.PushMirror{
ID: id,
Interval: interval,
}
m.Interval = interval
if err := repo_model.UpdatePushMirrorInterval(ctx, m); err != nil {
ctx.ServerError("UpdatePushMirrorInterval", err)
return
@@ -342,7 +339,10 @@ func SettingsPost(ctx *context.Context) {
// As observed in the implementation of `push-mirror-sync`, pushing to the queue is necessary for
// updates to take effect. So when there are updates within the given interval, the queue needs to
// be updated accordingly.
mirror_service.AddPushMirrorToQueue(m.ID)
if !ctx.FormBool("push_mirror_defer_sync") {
// push_mirror_defer_sync is mainly for testing purpose, we do not really want to sync the push mirror immediately
mirror_service.AddPushMirrorToQueue(m.ID)
}
ctx.Flash.Success(ctx.Tr("repo.settings.update_settings_success"))
ctx.Redirect(repo.Link() + "/settings")
@@ -356,18 +356,18 @@ func SettingsPost(ctx *context.Context) {
// as an error on the UI for this action
ctx.Data["Err_RepoName"] = nil
m, err := selectPushMirrorByForm(ctx, form, repo)
if err != nil {
m, _, _ := repo_model.GetPushMirrorByIDAndRepoID(ctx, form.PushMirrorID, repo.ID)
if m == nil {
ctx.NotFound("", nil)
return
}
if err = mirror_service.RemovePushMirrorRemote(ctx, m); err != nil {
if err := mirror_service.RemovePushMirrorRemote(ctx, m); err != nil {
ctx.ServerError("RemovePushMirrorRemote", err)
return
}
if err = repo_model.DeletePushMirrors(ctx, repo_model.PushMirrorOptions{ID: m.ID, RepoID: m.RepoID}); err != nil {
if err := repo_model.DeletePushMirrors(ctx, repo_model.PushMirrorOptions{ID: m.ID, RepoID: m.RepoID}); err != nil {
ctx.ServerError("DeletePushMirrorByID", err)
return
}
@@ -970,24 +970,3 @@ func handleSettingRemoteAddrError(ctx *context.Context, err error, form *forms.R
}
ctx.RenderWithErr(ctx.Tr("repo.mirror_address_url_invalid"), tplSettingsOptions, form)
}
func selectPushMirrorByForm(ctx *context.Context, form *forms.RepoSettingForm, repo *repo_model.Repository) (*repo_model.PushMirror, error) {
id, err := strconv.ParseInt(form.PushMirrorID, 10, 64)
if err != nil {
return nil, err
}
pushMirrors, _, err := repo_model.GetPushMirrorsByRepoID(ctx, repo.ID, db.ListOptions{})
if err != nil {
return nil, err
}
for _, m := range pushMirrors {
if m.ID == id {
m.Repo = repo
return m, nil
}
}
return nil, fmt.Errorf("PushMirror[%v] not associated to repository %v", id, repo)
}

View File

@@ -50,6 +50,7 @@ import (
"code.gitea.io/gitea/routers/web/feed"
"code.gitea.io/gitea/services/context"
issue_service "code.gitea.io/gitea/services/issue"
repo_service "code.gitea.io/gitea/services/repository"
files_service "code.gitea.io/gitea/services/repository/files"
"github.com/nektos/act/pkg/model"
@@ -1155,26 +1156,25 @@ func Forks(ctx *context.Context) {
if page <= 0 {
page = 1
}
pageSize := setting.ItemsPerPage
pager := context.NewPagination(ctx.Repo.Repository.NumForks, setting.ItemsPerPage, page, 5)
ctx.Data["Page"] = pager
forks, err := repo_model.GetForks(ctx, ctx.Repo.Repository, db.ListOptions{
Page: pager.Paginater.Current(),
PageSize: setting.ItemsPerPage,
forks, total, err := repo_service.FindForks(ctx, ctx.Repo.Repository, ctx.Doer, db.ListOptions{
Page: page,
PageSize: pageSize,
})
if err != nil {
ctx.ServerError("GetForks", err)
ctx.ServerError("FindForks", err)
return
}
for _, fork := range forks {
if err = fork.LoadOwner(ctx); err != nil {
ctx.ServerError("LoadOwner", err)
return
}
if err := repo_model.RepositoryList(forks).LoadOwners(ctx); err != nil {
ctx.ServerError("LoadAttributes", err)
return
}
pager := context.NewPagination(int(total), pageSize, page, 5)
ctx.Data["Page"] = pager
ctx.Data["Forks"] = forks
ctx.HTML(http.StatusOK, tplForks)

View File

@@ -6,6 +6,7 @@ package repo
import (
"bytes"
gocontext "context"
"fmt"
"io"
"net/http"
@@ -651,22 +652,32 @@ func WikiPages(ctx *context.Context) {
return
}
entries, err := commit.ListEntries()
treePath := "" // To support list sub folders' pages in the future
tree, err := commit.SubTree(treePath)
if err != nil {
ctx.ServerError("SubTree", err)
return
}
allEntries, err := tree.ListEntries()
if err != nil {
ctx.ServerError("ListEntries", err)
return
}
allEntries.CustomSort(base.NaturalSortLess)
entries, _, err := allEntries.GetCommitsInfo(gocontext.Context(ctx), commit, treePath)
if err != nil {
ctx.ServerError("GetCommitsInfo", err)
return
}
pages := make([]PageMeta, 0, len(entries))
for _, entry := range entries {
if !entry.IsRegular() {
if !entry.Entry.IsRegular() {
continue
}
c, err := wikiRepo.GetCommitByPath(entry.Name())
if err != nil {
ctx.ServerError("GetCommit", err)
return
}
wikiName, err := wiki_service.GitPathToWebPath(entry.Name())
wikiName, err := wiki_service.GitPathToWebPath(entry.Entry.Name())
if err != nil {
if repo_model.IsErrWikiInvalidFileName(err) {
continue
@@ -678,8 +689,8 @@ func WikiPages(ctx *context.Context) {
pages = append(pages, PageMeta{
Name: displayName,
SubURL: wiki_service.WebPathToURLPath(wikiName),
GitEntryName: entry.Name(),
UpdatedUnix: timeutil.TimeStamp(c.Author.When.Unix()),
GitEntryName: entry.Entry.Name(),
UpdatedUnix: timeutil.TimeStamp(entry.Commit.Author.When.Unix()),
})
}
ctx.Data["Pages"] = pages

View File

@@ -8,37 +8,24 @@ import (
"code.gitea.io/gitea/models/db"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/optional"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/services/context"
"code.gitea.io/gitea/services/convert"
)
// Search search users
func Search(ctx *context.Context) {
listOptions := db.ListOptions{
Page: ctx.FormInt("page"),
PageSize: convert.ToCorrectPageSize(ctx.FormInt("limit")),
}
users, maxResults, err := user_model.SearchUsers(ctx, &user_model.SearchUserOptions{
// SearchCandidates searches candidate users for dropdown list
func SearchCandidates(ctx *context.Context) {
users, _, err := user_model.SearchUsers(ctx, &user_model.SearchUserOptions{
Actor: ctx.Doer,
Keyword: ctx.FormTrim("q"),
UID: ctx.FormInt64("uid"),
Type: user_model.UserTypeIndividual,
IsActive: ctx.FormOptionalBool("active"),
ListOptions: listOptions,
IsActive: optional.Some(true),
ListOptions: db.ListOptions{PageSize: setting.UI.MembersPagingNum},
})
if err != nil {
ctx.JSON(http.StatusInternalServerError, map[string]any{
"ok": false,
"error": err.Error(),
})
ctx.ServerError("Unable to search users", err)
return
}
ctx.SetTotalCountHeader(maxResults)
ctx.JSON(http.StatusOK, map[string]any{
"ok": true,
"data": convert.ToUsers(ctx, ctx.Doer, users),
})
ctx.JSON(http.StatusOK, map[string]any{"data": convert.ToUsers(ctx, ctx.Doer, users)})
}

View File

@@ -33,8 +33,9 @@ func RegenerateScratchTwoFactor(ctx *context.Context) {
if auth.IsErrTwoFactorNotEnrolled(err) {
ctx.Flash.Error(ctx.Tr("settings.twofa_not_enrolled"))
ctx.Redirect(setting.AppSubURL + "/user/settings/security")
} else {
ctx.ServerError("SettingsTwoFactor: Failed to GetTwoFactorByUID", err)
}
ctx.ServerError("SettingsTwoFactor: Failed to GetTwoFactorByUID", err)
return
}
@@ -63,8 +64,9 @@ func DisableTwoFactor(ctx *context.Context) {
if auth.IsErrTwoFactorNotEnrolled(err) {
ctx.Flash.Error(ctx.Tr("settings.twofa_not_enrolled"))
ctx.Redirect(setting.AppSubURL + "/user/settings/security")
} else {
ctx.ServerError("SettingsTwoFactor: Failed to GetTwoFactorByUID", err)
}
ctx.ServerError("SettingsTwoFactor: Failed to GetTwoFactorByUID", err)
return
}
@@ -73,8 +75,9 @@ func DisableTwoFactor(ctx *context.Context) {
// There is a potential DB race here - we must have been disabled by another request in the intervening period
ctx.Flash.Success(ctx.Tr("settings.twofa_disabled"))
ctx.Redirect(setting.AppSubURL + "/user/settings/security")
} else {
ctx.ServerError("SettingsTwoFactor: Failed to DeleteTwoFactorByID", err)
}
ctx.ServerError("SettingsTwoFactor: Failed to DeleteTwoFactorByID", err)
return
}

View File

@@ -551,7 +551,7 @@ func registerRoutes(m *web.Route) {
m.Post("/authorize", web.Bind(forms.AuthorizationForm{}), auth.AuthorizeOAuth)
}, ignSignInAndCsrf, reqSignIn)
m.Methods("GET, OPTIONS", "/login/oauth/userinfo", optionsCorsHandler(), ignSignInAndCsrf, auth.InfoOAuth)
m.Methods("GET, POST, OPTIONS", "/login/oauth/userinfo", optionsCorsHandler(), ignSignInAndCsrf, auth.InfoOAuth)
m.Methods("POST, OPTIONS", "/login/oauth/access_token", optionsCorsHandler(), web.Bind(forms.AccessTokenForm{}), ignSignInAndCsrf, auth.AccessTokenOAuth)
m.Methods("GET, OPTIONS", "/login/oauth/keys", optionsCorsHandler(), ignSignInAndCsrf, auth.OIDCKeys)
m.Methods("POST, OPTIONS", "/login/oauth/introspect", optionsCorsHandler(), web.Bind(forms.IntrospectTokenForm{}), ignSignInAndCsrf, auth.IntrospectOAuth)
@@ -668,7 +668,7 @@ func registerRoutes(m *web.Route) {
m.Post("/forgot_password", auth.ForgotPasswdPost)
m.Post("/logout", auth.SignOut)
m.Get("/stopwatches", reqSignIn, user.GetStopwatches)
m.Get("/search", ignExploreSignIn, user.Search)
m.Get("/search_candidates", ignExploreSignIn, user.SearchCandidates)
m.Group("/oauth2", func() {
m.Get("/{provider}", auth.SignInOAuth)
m.Get("/{provider}/callback", auth.SignInOAuthCallback)
@@ -1454,6 +1454,35 @@ func registerRoutes(m *web.Route) {
)
// end "/{username}/{reponame}/activity"
m.Group("/{username}/{reponame}", func() {
m.Group("/pulls/{index}", func() {
m.Get("", repo.SetWhitespaceBehavior, repo.GetPullDiffStats, repo.ViewIssue)
m.Get(".diff", repo.DownloadPullDiff)
m.Get(".patch", repo.DownloadPullPatch)
m.Group("/commits", func() {
m.Get("", context.RepoRef(), repo.SetWhitespaceBehavior, repo.GetPullDiffStats, repo.ViewPullCommits)
m.Get("/list", context.RepoRef(), repo.GetPullCommits)
m.Get("/{sha:[a-f0-9]{7,40}}", context.RepoRef(), repo.SetEditorconfigIfExists, repo.SetDiffViewStyle, repo.SetWhitespaceBehavior, repo.SetShowOutdatedComments, repo.ViewPullFilesForSingleCommit)
})
m.Post("/merge", context.RepoMustNotBeArchived(), web.Bind(forms.MergePullRequestForm{}), repo.MergePullRequest)
m.Post("/cancel_auto_merge", context.RepoMustNotBeArchived(), repo.CancelAutoMergePullRequest)
m.Post("/update", repo.UpdatePullRequest)
m.Post("/set_allow_maintainer_edit", web.Bind(forms.UpdateAllowEditsForm{}), repo.SetAllowEdits)
m.Post("/cleanup", context.RepoMustNotBeArchived(), context.RepoRef(), repo.CleanUpPullRequest)
m.Group("/files", func() {
m.Get("", context.RepoRef(), repo.SetEditorconfigIfExists, repo.SetDiffViewStyle, repo.SetWhitespaceBehavior, repo.SetShowOutdatedComments, repo.ViewPullFilesForAllCommitsOfPr)
m.Get("/{sha:[a-f0-9]{7,40}}", context.RepoRef(), repo.SetEditorconfigIfExists, repo.SetDiffViewStyle, repo.SetWhitespaceBehavior, repo.SetShowOutdatedComments, repo.ViewPullFilesStartingFromCommit)
m.Get("/{shaFrom:[a-f0-9]{7,40}}..{shaTo:[a-f0-9]{7,40}}", context.RepoRef(), repo.SetEditorconfigIfExists, repo.SetDiffViewStyle, repo.SetWhitespaceBehavior, repo.SetShowOutdatedComments, repo.ViewPullFilesForRange)
m.Group("/reviews", func() {
m.Get("/new_comment", repo.RenderNewCodeCommentForm)
m.Post("/comments", web.Bind(forms.CodeCommentForm{}), repo.SetShowOutdatedComments, repo.CreateCodeComment)
m.Post("/submit", web.Bind(forms.SubmitReviewForm{}), repo.SubmitReview)
}, context.RepoMustNotBeArchived())
})
})
}, ignSignIn, context.RepoAssignment, repo.MustAllowPulls, reqRepoPullsReader)
// end "/{username}/{reponame}/pulls/{index}": repo pull request
m.Group("/{username}/{reponame}", func() {
m.Group("/activity_author_data", func() {
m.Get("", repo.ActivityAuthors)
@@ -1492,32 +1521,6 @@ func registerRoutes(m *web.Route) {
return cancel
})
m.Group("/pulls/{index}", func() {
m.Get("", repo.SetWhitespaceBehavior, repo.GetPullDiffStats, repo.ViewIssue)
m.Get(".diff", repo.DownloadPullDiff)
m.Get(".patch", repo.DownloadPullPatch)
m.Group("/commits", func() {
m.Get("", context.RepoRef(), repo.SetWhitespaceBehavior, repo.GetPullDiffStats, repo.ViewPullCommits)
m.Get("/list", context.RepoRef(), repo.GetPullCommits)
m.Get("/{sha:[a-f0-9]{7,40}}", context.RepoRef(), repo.SetEditorconfigIfExists, repo.SetDiffViewStyle, repo.SetWhitespaceBehavior, repo.SetShowOutdatedComments, repo.ViewPullFilesForSingleCommit)
})
m.Post("/merge", context.RepoMustNotBeArchived(), web.Bind(forms.MergePullRequestForm{}), repo.MergePullRequest)
m.Post("/cancel_auto_merge", context.RepoMustNotBeArchived(), repo.CancelAutoMergePullRequest)
m.Post("/update", repo.UpdatePullRequest)
m.Post("/set_allow_maintainer_edit", web.Bind(forms.UpdateAllowEditsForm{}), repo.SetAllowEdits)
m.Post("/cleanup", context.RepoMustNotBeArchived(), context.RepoRef(), repo.CleanUpPullRequest)
m.Group("/files", func() {
m.Get("", context.RepoRef(), repo.SetEditorconfigIfExists, repo.SetDiffViewStyle, repo.SetWhitespaceBehavior, repo.SetShowOutdatedComments, repo.ViewPullFilesForAllCommitsOfPr)
m.Get("/{sha:[a-f0-9]{7,40}}", context.RepoRef(), repo.SetEditorconfigIfExists, repo.SetDiffViewStyle, repo.SetWhitespaceBehavior, repo.SetShowOutdatedComments, repo.ViewPullFilesStartingFromCommit)
m.Get("/{shaFrom:[a-f0-9]{7,40}}..{shaTo:[a-f0-9]{7,40}}", context.RepoRef(), repo.SetEditorconfigIfExists, repo.SetDiffViewStyle, repo.SetWhitespaceBehavior, repo.SetShowOutdatedComments, repo.ViewPullFilesForRange)
m.Group("/reviews", func() {
m.Get("/new_comment", repo.RenderNewCodeCommentForm)
m.Post("/comments", web.Bind(forms.CodeCommentForm{}), repo.SetShowOutdatedComments, repo.CreateCodeComment)
m.Post("/submit", web.Bind(forms.SubmitReviewForm{}), repo.SubmitReview)
}, context.RepoMustNotBeArchived())
})
}, repo.MustAllowPulls)
m.Group("/media", func() {
m.Get("/branch/*", context.RepoRefByType(context.RepoRefBranch), repo.SingleDownloadOrLFS)
m.Get("/tag/*", context.RepoRefByType(context.RepoRefTag), repo.SingleDownloadOrLFS)

View File

@@ -83,7 +83,12 @@ func ParseAuthorizationToken(req *http.Request) (int64, error) {
return 0, fmt.Errorf("split token failed")
}
token, err := jwt.ParseWithClaims(parts[1], &actionsClaims{}, func(t *jwt.Token) (any, error) {
return TokenToTaskID(parts[1])
}
// TokenToTaskID returns the TaskID associated with the provided JWT token
func TokenToTaskID(token string) (int64, error) {
parsedToken, err := jwt.ParseWithClaims(token, &actionsClaims{}, func(t *jwt.Token) (any, error) {
if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
return nil, fmt.Errorf("unexpected signing method: %v", t.Header["alg"])
}
@@ -93,8 +98,8 @@ func ParseAuthorizationToken(req *http.Request) (int64, error) {
return 0, err
}
c, ok := token.Claims.(*actionsClaims)
if !token.Valid || !ok {
c, ok := parsedToken.Claims.(*actionsClaims)
if !parsedToken.Valid || !ok {
return 0, fmt.Errorf("invalid token claim")
}
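`TokenToTaskID` parses the runner's JWT and rejects anything not signed with HMAC before trusting the claims. A standalone sketch of the same pattern using `github.com/golang-jwt/jwt/v5` (the library the hunk above already uses); the generic registered-claims type and the placeholder secret are assumptions for illustration.

```go
// Sketch of the signing-method check performed before trusting JWT claims.
package jwtexample

import (
	"fmt"

	"github.com/golang-jwt/jwt/v5"
)

func parseHMACToken(tokenString string, secret []byte) (*jwt.RegisteredClaims, error) {
	claims := &jwt.RegisteredClaims{}
	parsed, err := jwt.ParseWithClaims(tokenString, claims, func(t *jwt.Token) (any, error) {
		// Refuse anything not HMAC-signed; otherwise a token signed with a
		// different method (e.g. RSA) could be validated with the wrong key.
		if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, fmt.Errorf("unexpected signing method: %v", t.Header["alg"])
		}
		return secret, nil
	})
	if err != nil {
		return nil, err
	}
	if !parsed.Valid {
		return nil, fmt.Errorf("invalid token")
	}
	return claims, nil
}
```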

View File

@@ -115,11 +115,20 @@ func (input *notifyInput) Notify(ctx context.Context) {
}
func notify(ctx context.Context, input *notifyInput) error {
shouldDetectSchedules := input.Event == webhook_module.HookEventPush && input.Ref.BranchName() == input.Repo.DefaultBranch
if input.Doer.IsActions() {
// avoid triggering cyclically, for example:
// a comment on an issue would trigger the runner to add a new comment as a reply,
// and the new comment will trigger the runner again.
log.Debug("ignore executing %v for event %v whose doer is %v", getMethod(ctx), input.Event, input.Doer.Name)
// we should update schedule tasks in this case, because
// 1. schedule tasks cannot be triggered by other events, so cyclic triggering will not occur
// 2. some schedule tasks may update the repo periodically, so the refs of schedule tasks need to be updated
if shouldDetectSchedules {
return DetectAndHandleSchedules(ctx, input.Repo)
}
return nil
}
if input.Repo.IsEmpty || input.Repo.IsArchived {
@@ -173,7 +182,6 @@ func notify(ctx context.Context, input *notifyInput) error {
var detectedWorkflows []*actions_module.DetectedWorkflow
actionsConfig := input.Repo.MustGetUnit(ctx, unit_model.TypeActions).ActionsConfig()
shouldDetectSchedules := input.Event == webhook_module.HookEventPush && input.Ref.BranchName() == input.Repo.DefaultBranch
workflows, schedules, err := actions_module.DetectWorkflows(gitRepo, commit,
input.Event,
input.Payload,

View File

@@ -5,6 +5,7 @@
package auth
import (
"errors"
"net/http"
"strings"
@@ -141,6 +142,15 @@ func (b *Basic) Verify(req *http.Request, w http.ResponseWriter, store DataStore
}
if skipper, ok := source.Cfg.(LocalTwoFASkipper); !ok || !skipper.IsSkipLocalTwoFA() {
// Check if the user has webAuthn registration
hasWebAuthn, err := auth_model.HasWebAuthnRegistrationsByUID(req.Context(), u.ID)
if err != nil {
return nil, err
}
if hasWebAuthn {
return nil, errors.New("Basic authorization is not allowed while webAuthn enrolled")
}
if err := validateTOTP(req, u); err != nil {
return nil, err
}

View File

@@ -17,6 +17,7 @@ import (
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/timeutil"
"code.gitea.io/gitea/modules/web/middleware"
"code.gitea.io/gitea/services/actions"
"code.gitea.io/gitea/services/auth/source/oauth2"
)
@@ -27,6 +28,9 @@ var (
// CheckOAuthAccessToken returns uid of user from oauth token
func CheckOAuthAccessToken(ctx context.Context, accessToken string) int64 {
if !setting.OAuth2.Enabled {
return 0
}
// JWT tokens require a "."
if !strings.Contains(accessToken, ".") {
return 0
@@ -49,6 +53,18 @@ func CheckOAuthAccessToken(ctx context.Context, accessToken string) int64 {
return grant.UserID
}
// CheckTaskIsRunning verifies that the TaskID corresponds to a running task
func CheckTaskIsRunning(ctx context.Context, taskID int64) bool {
// Verify the task exists
task, err := actions_model.GetTaskByID(ctx, taskID)
if err != nil {
return false
}
// Verify that it's running
return task.Status == actions_model.StatusRunning
}
// OAuth2 implements the Auth interface and authenticates requests
// (API requests only) by looking for an OAuth token in query parameters or the
// "Authorization" header.
@@ -92,6 +108,16 @@ func parseToken(req *http.Request) (string, bool) {
func (o *OAuth2) userIDFromToken(ctx context.Context, tokenSHA string, store DataStore) int64 {
// Let's see if token is valid.
if strings.Contains(tokenSHA, ".") {
// First attempt to decode an actions JWT, returning the actions user
if taskID, err := actions.TokenToTaskID(tokenSHA); err == nil {
if CheckTaskIsRunning(ctx, taskID) {
store.GetData()["IsActionsToken"] = true
store.GetData()["ActionsTaskID"] = taskID
return user_model.ActionsUserID
}
}
// Otherwise, check if this is an OAuth access token
uid := CheckOAuthAccessToken(ctx, tokenSHA)
if uid != 0 {
store.GetData()["IsApiToken"] = true

View File

@@ -0,0 +1,55 @@
// Copyright 2024 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package auth
import (
"context"
"testing"
"code.gitea.io/gitea/models/unittest"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/web/middleware"
"code.gitea.io/gitea/services/actions"
"github.com/stretchr/testify/assert"
)
func TestUserIDFromToken(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
t.Run("Actions JWT", func(t *testing.T) {
const RunningTaskID = 47
token, err := actions.CreateAuthorizationToken(RunningTaskID, 1, 2)
assert.NoError(t, err)
ds := make(middleware.ContextData)
o := OAuth2{}
uid := o.userIDFromToken(context.Background(), token, ds)
assert.Equal(t, int64(user_model.ActionsUserID), uid)
assert.Equal(t, ds["IsActionsToken"], true)
assert.Equal(t, ds["ActionsTaskID"], int64(RunningTaskID))
})
}
func TestCheckTaskIsRunning(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
cases := map[string]struct {
TaskID int64
Expected bool
}{
"Running": {TaskID: 47, Expected: true},
"Missing": {TaskID: 1, Expected: false},
"Cancelled": {TaskID: 46, Expected: false},
}
for name := range cases {
c := cases[name]
t.Run(name, func(t *testing.T) {
actual := CheckTaskIsRunning(context.Background(), c.TaskID)
assert.Equal(t, c.Expected, actual)
})
}
}

View File

@@ -394,14 +394,7 @@ func repoAssignment(ctx *Context, repo *repo_model.Repository) {
}
}
pushMirrors, _, err := repo_model.GetPushMirrorsByRepoID(ctx, repo.ID, db.ListOptions{})
if err != nil {
ctx.ServerError("GetPushMirrorsByRepoID", err)
return
}
ctx.Repo.Repository = repo
ctx.Data["PushMirrors"] = pushMirrors
ctx.Data["RepoName"] = ctx.Repo.Repository.Name
ctx.Data["IsEmptyRepo"] = ctx.Repo.Repository.IsEmpty
}

View File

@@ -0,0 +1,70 @@
// Copyright 2024 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package doctor
import (
"context"
"fmt"
"code.gitea.io/gitea/models/db"
repo_model "code.gitea.io/gitea/models/repo"
unit_model "code.gitea.io/gitea/models/unit"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/optional"
repo_service "code.gitea.io/gitea/services/repository"
)
func disableMirrorActionsUnit(ctx context.Context, logger log.Logger, autofix bool) error {
var reposToFix []*repo_model.Repository
for page := 1; ; page++ {
repos, _, err := repo_model.SearchRepository(ctx, &repo_model.SearchRepoOptions{
ListOptions: db.ListOptions{
PageSize: repo_model.RepositoryListDefaultPageSize,
Page: page,
},
Mirror: optional.Some(true),
})
if err != nil {
return fmt.Errorf("SearchRepository: %w", err)
}
if len(repos) == 0 {
break
}
for _, repo := range repos {
if repo.UnitEnabled(ctx, unit_model.TypeActions) {
reposToFix = append(reposToFix, repo)
}
}
}
if len(reposToFix) == 0 {
logger.Info("Found no mirror with actions unit enabled")
} else {
logger.Warn("Found %d mirrors with actions unit enabled", len(reposToFix))
}
if !autofix || len(reposToFix) == 0 {
return nil
}
for _, repo := range reposToFix {
if err := repo_service.UpdateRepositoryUnits(ctx, repo, nil, []unit_model.Type{unit_model.TypeActions}); err != nil {
return err
}
}
logger.Info("Fixed %d mirrors with actions unit enabled", len(reposToFix))
return nil
}
func init() {
Register(&Check{
Title: "Disable the actions unit for all mirrors",
Name: "disable-mirror-actions-unit",
IsDefault: false,
Run: disableMirrorActionsUnit,
Priority: 9,
})
}

Some files were not shown because too many files have changed in this diff.