- Add the ability to block a user via their profile page.
- This will unstar their repositories and vice versa.
- Blocked users cannot create issues or pull requests on the doer's repositories (note that this is not the case for organizations).
- Blocked users cannot comment on issues or pull requests opened by the doer.
- Blocked users cannot add reactions to the doer's comments.
- Blocked users cannot cause a notification through mentioning the doer.
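A minimal sketch of the kind of guard these rules imply (the helper name and signature are hypothetical, not necessarily Forgejo's actual API):
```go
// canInteract reports whether doer may open issues/PRs, comment,
// or react on content owned by ownerID. user_model.IsBlocked is a
// hypothetical lookup of a (blocker, blockee) pair.
func canInteract(ctx context.Context, ownerID, doerID int64) (bool, error) {
	blocked, err := user_model.IsBlocked(ctx, ownerID, doerID)
	if err != nil {
		return false, err
	}
	return !blocked, nil
}
```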
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/540
(cherry picked from commit 687d852480388897db4d7b0cb397cf7135ab97b1)
(cherry picked from commit 0c32a4fde531018f74e01d9db6520895fcfa10cc)
(cherry picked from commit 1791130e3cb8470b9b39742e0004d5e4c7d1e64d)
(cherry picked from commit 00f411819f62c02016d46602ab4daf49effe0550)
(cherry picked from commit e0c039b0e899e787a8df1efdd6b47388d93e08fa)
(cherry picked from commit b5a058ef0039e95be23893e6fefdcb62a7de071a)
(cherry picked from commit 5ff5460d28a482526da7e77bffb18d08de14aaaa)
(cherry picked from commit 97bc6e619d2970839b8692b7b025ff0ec1c96d12)
Previously, Gitea showed the database table stats on the `admin dashboard`
page.
This had some problems:
* `count(*)` is quite heavy. If tables have many records, it blocks
loading of the admin page for a long time.
* Some users have even reported that they cannot visit their admin
page at all, because the page blocks or fails with a `50x error (reverse proxy
timeout)`.
* The `actions` stat is not useful. The table is simply too large. Does
it really matter if it contains 1,000,000 rows or 9,999,999 rows?
* The translation `admin.dashboard.statistic_info` is difficult to
maintain.
So, this PR uses a separate page to show the stats and removes the
`actions` stat.
![image](https://github.com/go-gitea/gitea/assets/2114189/babf7c61-b93b-4a62-bfaa-22983636427e)
## ⚠️ BREAKING
The `actions` Prometheus metrics collector has been removed for the
reasons mentioned above.
Please do not rely on its output anymore.
This PR replaces all string `refName`s with the type `git.RefName` to make the
code more maintainable.
Fix #15367
Replaces #23070
It also fixes a bug where tags were not synced, because `git remote --prune
origin` does not remove local tags that have been removed on the remote.
We should in fact use `git fetch --prune --tags origin` rather than `git
remote update origin` to do the sync.
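For illustration, a minimal sketch of running that sync from Go (`repoPath` is assumed and error handling is simplified; this is not the exact Gitea code):
```go
// Fetch from origin, pruning removed branches and removed tags.
cmd := exec.CommandContext(ctx, "git", "fetch", "--prune", "--tags", "origin")
cmd.Dir = repoPath
if out, err := cmd.CombinedOutput(); err != nil {
	return fmt.Errorf("git fetch failed: %w: %s", err, out)
}
```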
Some answer from ChatGPT, for reference:
> If the `git fetch --prune --tags` command is not working as expected,
there could be a few reasons why. Here are a few things to check:
>
> Make sure that you have the latest version of Git installed on your
system. You can check the version by running `git --version` in your
terminal. If you have an outdated version, try updating Git and see if
that resolves the issue.
>
> Check that your Git repository is properly configured to track the
remote repository's tags. You can check this by running `git config
--get-all remote.origin.fetch` and verifying that it includes
`+refs/tags/*:refs/tags/*`. If it does not, you can add it by running `git
config --add remote.origin.fetch "+refs/tags/*:refs/tags/*"`.
>
> Verify that the tags you are trying to prune actually exist on the
remote repository. You can do this by running `git ls-remote --tags
origin` to list all the tags on the remote repository.
>
> Check if any local tags have been created that match the names of tags
on the remote repository. If so, these local tags may be preventing the
`git fetch --prune --tags` command from working properly. You can delete
local tags using the `git tag -d` command.
---------
Co-authored-by: delvh <dev.lh@web.de>
Fix #21324
In the current logic, if the `Actor` user is not an admin user, no
activities from private organizations are shown, even if the `Actor`
user is a member of the organization.
As mentioned in the issue, when a deploy key is used to make a commit and
push, the activity's `act_user_id` is the ID of the organization, so
the activity is not shown to non-admin users because the visibility of
the organization is private.
55a5717760/models/activities/action.go (L490-L503)
This PR improves this logic so the activities of private organizations
can be shown.
Replace #23350.
Refactor `setting.Database.UseMySQL` to
`setting.Database.Type.IsMySQL()`,
to avoid mismatches between `Type` and the `UseXXX` variables.
This refactor can fix the bug mentioned in #23350, so it should be
backported.
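A minimal sketch of the idea (the type and the constant strings are assumptions for illustration):
```go
// DatabaseType wraps the configured database type string, so type
// checks derive from one source of truth instead of separate flags.
type DatabaseType string

func (t DatabaseType) IsMySQL() bool      { return t == "mysql" }
func (t DatabaseType) IsPostgreSQL() bool { return t == "postgres" }
```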
To avoid duplicated loads of the same data within one HTTP request, we can set
a context cache to do that. For example, some pages may load the same user by
ID from the database in different areas of the same page, with the
code hidden in two different deep call paths. How should we share the
user? With this PR, as long as both entry functions accept
`context.Context` as their first parameter, we just need to refactor
`GetUserByID` to reuse the user from the context cache, and it will not
be loaded twice within one HTTP request.
But of course, sometimes we would like to reload an object from the
database, which is why `RemoveContextData` is also exposed.
The core context cache is here. It defines a new context:
```go
type cacheContext struct {
	ctx  context.Context
	data map[any]map[any]any
	lock sync.RWMutex
}

var cacheContextKey = struct{}{}

func WithCacheContext(ctx context.Context) context.Context {
	return context.WithValue(ctx, cacheContextKey, &cacheContext{
		ctx:  ctx,
		data: make(map[any]map[any]any),
	})
}
```
Then you can use the four methods below to read, write, and delete data within
the same context:
```go
func GetContextData(ctx context.Context, tp, key any) any
func SetContextData(ctx context.Context, tp, key, value any)
func RemoveContextData(ctx context.Context, tp, key any)
func GetWithContextCache[T any](ctx context.Context, cacheGroupKey string, cacheTargetID any, f func() (T, error)) (T, error)
```
Then let's take a look at how `system.GetSetting` implements it:
```go
func GetSetting(ctx context.Context, key string) (string, error) {
	return cache.GetWithContextCache(ctx, contextCacheKey, key, func() (string, error) {
		return cache.GetString(genSettingCacheKey(key), func() (string, error) {
			res, err := GetSettingNoCache(ctx, key)
			if err != nil {
				return "", err
			}
			return res.SettingValue, nil
		})
	})
}
```
First, it checks whether the context cache contains the setting object for the
key. If not, it queries the global cache, which may be a memory or
Redis cache. If that also misses, it loads the object from the database. In the
end, if the object came from the global cache or the database, it is
stored in the context cache.
An object stored in the context cache is only destroyed when the
context itself is destroyed.
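For example, the `GetUserByID` refactor mentioned above could look roughly like this (a sketch; the cache group key, the xorm call, and the error type are illustrative):
```go
func GetUserByID(ctx context.Context, id int64) (*User, error) {
	return cache.GetWithContextCache(ctx, "users", id, func() (*User, error) {
		// This loader runs at most once per request context for a given id;
		// later callers with the same ctx get the cached *User.
		u := new(User)
		has, err := db.GetEngine(ctx).ID(id).Get(u)
		if err != nil {
			return nil, err
		} else if !has {
			return nil, ErrUserNotExist{UID: id}
		}
		return u, nil
	})
}
```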
Partially fix #19345
This PR adds `Link` methods for different objects. The `Link`
methods are no different from `HTMLURL` except that they lack the absolute
URL prefix. Most `HTMLURL` uses in the UI have been replaced with `Link`, so
that users can visit the pages from a different domain or IP.
This PR also introduces a new JavaScript configuration value,
`window.config.reqAppUrl`, which differs from `appUrl`: it is
still an absolute URL, but its domain is replaced with the currently
requested domain.
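A minimal sketch of the distinction (the receiver and field names are illustrative, not the exact Gitea code):
```go
// Link returns a path relative to the server root, so the page works
// from whichever domain or IP the user requested.
func (r *Repository) Link() string {
	return setting.AppSubURL + "/" + url.PathEscape(r.OwnerName) + "/" + url.PathEscape(r.Name)
}

// HTMLURL keeps the configured absolute URL prefix.
func (r *Repository) HTMLURL() string {
	return setting.AppURL + url.PathEscape(r.OwnerName) + "/" + url.PathEscape(r.Name)
}
```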
- Currently the function `GetUsersWhoCanCreateOrgRepo` uses a query that
can return duplicated users in the result; this can happen when a user is
in more than one qualifying team, e.g. both the owner team and another team
that has permission to create organization repositories.
- Add test code to simulate the above condition for user 3;
[`TestGetUsersWhoCanCreateOrgRepo`](a1fcb1cfb8/models/organization/org_test.go (L435))
is the test function that tests for this.
- The fix is quite trivial: use a map keyed by user ID in order to drop the
duplicates, as sketched below.
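A minimal sketch of the dedup approach (variable names are illustrative):
```go
// A map keyed by user ID holds each user at most once, so duplicates
// returned by the query are collapsed automatically.
unique := make(map[int64]*user_model.User)
for _, u := range usersFromQuery {
	unique[u.ID] = u
}
```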
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Change all license headers to comply with REUSE specification.
Fix #16132
Co-authored-by: flynnnnnnnnnn <flynnnnnnnnnn@github>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
This PR adds a context parameter to a bunch of methods. Some helper
`xxxCtx()` methods have now been replaced by the normal names.
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Fix #19513
This PR introduces a new db method `InTransaction(context.Context)`,
and also a built-in check in `db.TxContext` and `db.WithTx`.
A new method, `db.AutoTx`, has also been introduced; it is not used here yet but could be used by other PRs.
`WithTx` will always open a new transaction; if a transaction already exists in the context, it returns an error.
`AutoTx` will open a new transaction only if no transaction exists in the context.
That means it will always run inside a transaction if there is no error.
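A sketch of the intended semantics (the exact signatures are assumed from the description above):
```go
func doWork(ctx context.Context) error {
	// WithTx always opens a new transaction; it returns an error
	// if ctx already carries one, so check InTransaction first.
	if db.InTransaction(ctx) {
		return errors.New("doWork must not be called inside a transaction")
	}
	return db.WithTx(ctx, func(ctx context.Context) error {
		// All database work here runs in the new transaction.
		// AutoTx, in contrast, would have reused an existing
		// transaction in ctx instead of failing.
		return nil
	})
}
```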
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: 6543 <6543@obermui.de>
I found myself wondering whether a PR I scheduled for automerge was
actually merged. It was, but I didn't receive a mail notification for it
- that makes sense considering I am the doer and usually don't want to
receive such notifications. But ideally I want to receive a notification
when a PR was merged because I scheduled it for automerge.
This PR implements exactly that.
The implementation works, but I wonder if there's a way to avoid passing
the "This PR was automerged" state down so much. I tried solving this
via the database (checking if there's an automerge scheduled for this PR
when sending the notification) but that did not work reliably, probably
because sending the notification happens async and the entry might have
already been deleted. My implementation might be the most
straightforward but maybe not the most elegant.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
A lot of our code is repeatedly testing if individual errors are
specific types of NotExist errors. This is repetitive and unnecessary.
`Unwrap() error` provides a common way of labelling an error as a
NotExist error and we can/should use this.
This PR has chosen to use the common `io/fs` errors, e.g.
`fs.ErrNotExist`, for our errors. This is in some ways not completely
correct, as these are not filesystem errors, but it seems like a
reasonable thing to do and would allow us to simplify a lot of our code
to `errors.Is(err, fs.ErrNotExist)` instead of
`package.IsErr...NotExist(err)`.
I am open to suggestions to use a different base error - perhaps
`models/db.ErrNotExist` if that would be felt to be better.
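A minimal sketch of the pattern (the error type shown is illustrative):
```go
// ErrUserNotExist is a domain error that unwraps to fs.ErrNotExist,
// so callers can test errors.Is(err, fs.ErrNotExist) instead of a
// package-specific IsErrUserNotExist helper.
type ErrUserNotExist struct {
	UID int64
}

func (e ErrUserNotExist) Error() string {
	return fmt.Sprintf("user does not exist [uid: %d]", e.UID)
}

func (e ErrUserNotExist) Unwrap() error {
	return fs.ErrNotExist
}
```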
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: delvh <dev.lh@web.de>
When migrating, add several more important sanity checks (sketched below):
* SHAs must be SHAs
* Refs must be valid Refs
* URLs must be reasonable
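A rough sketch of such checks (the patterns and the accepted URL schemes are illustrative, not the exact rules in the migration code):
```go
// SHAs must be 4-64 hex characters.
var shaPattern = regexp.MustCompile(`^[0-9a-f]{4,64}$`)

// Refs must look like refs and avoid characters git forbids.
func isValidRefName(ref string) bool {
	return strings.HasPrefix(ref, "refs/") &&
		!strings.Contains(ref, "..") &&
		!strings.ContainsAny(ref, " ~^:?*[\\")
}

// URLs must parse and use an expected scheme.
func isReasonableURL(raw string) bool {
	u, err := url.Parse(raw)
	if err != nil {
		return false
	}
	switch u.Scheme {
	case "http", "https", "git", "ssh":
		return true
	}
	return false
}
```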
Signed-off-by: Andrew Thornton <art27@cantab.net>
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <matti@mdranta.net>
In #21031 we discovered that on very big tables Postgres will prefer a
scan involving the sort term over the more restrictive index.
Therefore we add another index for Postgres and update the original migration.
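A sketch of the kind of migration change involved (the table, columns, and index name are assumptions for illustration):
```go
func addPostgresOnlyIndex(x *xorm.Engine) error {
	// Only Postgres showed the bad query plan, so the extra index
	// covering the sort column is created for Postgres alone.
	if !setting.Database.Type.IsPostgreSQL() {
		return nil
	}
	_, err := x.Exec("CREATE INDEX IF NOT EXISTS idx_action_user_created ON action (user_id, created_unix)")
	return err
}
```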
Fix #21031
Signed-off-by: Andrew Thornton <art27@cantab.net>