Compare commits
37 commits · main...release/v1
Author | SHA1 | Date |
---|---|---|
Lauris BH | 86d61bbb10 | |
Lauris BH | bfe13a28c2 | |
Jonas Franz | ed27da4b0a | |
Ethan Koenig | 88c363f933 | |
Sandro Santilli | f33cd3c676 | |
Sandro Santilli | fecf9390ef | |
Lunny Xiao | bdf1856011 | |
Lunny Xiao | 6037f183cf | |
Lauris BH | 4b187f5167 | |
Lunny Xiao | aee45d072e | |
Lauris BH | e893aced89 | |
Lauris BH | d63ca66623 | |
Lauris BH | 59afb62ab2 | |
Ethan Koenig | 8a19c6b9a2 | |
Lunny Xiao | 81fd8c8fb6 | |
Lauris BH | fd7686171e | |
Ethan Koenig | ec6718ef40 | |
Ethan Koenig | 8f7054a864 | |
Ethan Koenig | 84352316a9 | |
Ethan Koenig | 237df2f339 | |
Lunny Xiao | b9abcb3b61 | |
Lunny Xiao | 9832b9509d | |
Lunny Xiao | 3d688bd2cc | |
Stephan Sachse | ce4a52c22c | |
Lauris BH | 974cffead9 | |
Lauris BH | 8cdd5a33cd | |
Lauris BH | 992ee21a29 | |
Lauris BH | 5ec9c45661 | |
Duncan Ogilvie | b5bdb0474a | |
techknowlogick | 751f01936b | |
Ethan Koenig | b16e5a42eb | |
Lauris BH | 460d0d9077 | |
Lunny Xiao | 9c07d909e6 | |
Ethan Koenig | 93d1ec4514 | |
Lauris BH | be41955407 | |
Lauris BH | c31e8777b7 | |
Lunny Xiao | 4683e540ad | |
.drone.yml (13 changes)
@@ -86,6 +86,19 @@ pipeline:
      event: [ push, pull_request ]
      branch: [ master ]

  test:
    image: webhippie/golang:edge
    pull: true
    group: test
    environment:
      TAGS: bindata sqlite
      GOPATH: /srv/app
    commands:
      - make test
    when:
      event: [ push, pull_request ]
      branch: [ release/* ]

  test:
    image: webhippie/golang:edge
    pull: true
CHANGELOG.md (256 changes)

@@ -1,5 +1,261 @@
# Changelog

## [1.3.3](https://github.com/go-gitea/gitea/releases/tag/v1.3.3) - 2018-02-15
* SECURITY
  * Fix escaping changed title in comments (#3530) (#3535)
  * Escape search query display (#3486) (#3489)
* BUGFIXES
  * Fix repo-transfer-and-team-repo-count bug (#3241) (#3244)
  * Open external tracker in blank window, consistently with wiki (#3227) (#3228)
  * Change SSL Mode from checkbox to string in admin page (#3208) (#3211)

## [1.3.2](https://github.com/go-gitea/gitea/releases/tag/v1.3.2) - 2017-12-14
* BUGFIXES
  * fix run web with -p push failed (#3154) (#3179)
  * Fix source download link when no code unit allowed (#3166) (#3169)
  * Allow adding collaborators with (fullname) (#3103) (#3168)
  * Fix repo links (#3093) (#3163)
  * Fix Uninitialized variable in ParsePatch (#3156) (#3162)
  * Fix migration order v1.3 (#3157)
  * Fix avatar URLs (#3069) (#3143)

## [1.3.1](https://github.com/go-gitea/gitea/releases/tag/v1.3.1) - 2017-12-08
* BUGFIXES
  * Sanitize logs for mirror sync (#3057, #3082) (#3078)
  * Fix missing branch in release bug (#3108) (#3117)
  * Fix repo indexer and submodule bug (#3107) (#3110)
  * Fix legacy URL redirects (#3100) (#3106)
  * Fix redis session failed (#3086) (#3089)
  * Fix issue list branch link broken (#3061) (#3070)
  * Fix missing password length check when change password (#3039) (#3071)

## [1.3.0](https://github.com/go-gitea/gitea/releases/tag/v1.3.0) - 2017-11-29
* BREAKING
  * Make URL scheme unambiguous (#2408)
* FEATURE
  * Add branch overiew page (#2108)
  * Code/repo search (#2582)
  * Add Activity page to repository (#2674)
  * Issue Timetracking (#2211)
  * Add orgmode document type on file view and readme (#2525)
  * Add external markup render support (#2570)
  * Implementation of discord webhook (#2402)
  * Webhooks for repo creation/deletion (#1663)
  * Complete push webhooks (#2530)
  * Add possibility to record branch information in an issue (#780)
  * Create new branch from branch selection dropdown (#2130)
  * Implementation of all repositories of a user from user->settings (#1740)
  * Add LFS object verification step after upload (#2868)
  * Configurable SSH cipher suite (#913)
  * Disable custom Git Hooks globally via configuration file (#2450)
  * Sync releases table with tags on push and for mirrors (#2459)
* BUGFIXES
  * Fix label comments for French locale (#3017)
  * Remove duplicate "Max Diff Lines" from config view (#3001)
  * Fix over-escaped characters (#2992)
  * Fix go-get, src and raw urls to new scheme (#2986)
  * Fix error when add user has full name to team (#2975)
  * Fix files/commits of merged PRs (#2970)
  * Update golang x/crypto dependencies - Fix SSH transport fail (#2951)
  * Fix memcache support when value is returned as string always (#2950)
  * Fix issue link rendering in commit messages (#2897)
  * Fix adding a new authentication source after selecting OAuth (#2889)
  * Fix new branch creation to new url scheme (#2884)
  * Allow spaces in username for LDAP users (#2880)
  * Fix LFS not returning correct content length when requesting a range … (#2864)
  * Fix fork repository cycle to self (#2860)
  * Fix click create pull request button 404 (#2859)
  * Fix API raw file content access for default branch (#2849)
  * Clean repository ROOT directory name with filepath.Clean (#2846)
  * Fix API raw requests for commits and tags (#2841)
  * Fix order of comments (#2835)
  * Issue content should not be updated when closing with comment (#2833)
  * Fix ordering in app.ini and fix run mode option (#2829)
  * Fix redirect url of legacy commits route (#2825)
  * Fix commits page url (#2823)
  * Fix wrong translations (#2818)
  * Fix dropdown menu position when explore repos (#2808)
  * Fix Git LFS object/repo link storage in database and small refactoring (#2803)
  * Use relative URLs for avatars on the dashboard (#2800)
  * Add checks for commits with missing author and time (#2771)
  * Fix emojify image URL (#2769)
  * Hide unactive on explore users and some refactors (#2741)
  * Fix IE unsupported javascript construction in branch dropdown (#2736)
  * Only update mirror last update after successful sync (#2730)
  * Fix semantic-ui style conflict with v-cloak (#2722)
  * Fixing wrong translation on sort type oldest/latest (#2720)
  * Fix PR, milestone and label functionality if issue unit is disabled (#2710)
  * Fix plain readme didn't render correctly on repo home page (#2705)
  * Fix organization removal from watch table migration (#2703)
  * Fix repository search function (#2689)
  * fix panic on gogs webhook creation (#2675)
  * Fix orgnization user watch repository (#2670)
  * GPG key email verification no longer case sensitive (#2661) (#2663)
  * Fix index column deletion (#2651)
  * table `pull_request` wasn't updated correctly (#2649)
  * Fix go get response if only app URL is custom in configuration (#2634)
  * Fix doubled issue tab introduced in migration v16 (#2611)
  * Rewrite migrations to not depend on future code changes (#2604)
  * Fix implementation of repo Home func (#2601)
  * Fix translation upload to crowdin (#2599)
  * Reduce usage of allcols on update (#2596)
  * fix go get subpackage bug (#2584)
  * Fix broken migration to add can_push field back to table (#2574)
  * fix readme view bug (#2566)
  * Fix sending mail with a non-latin display name. #2102 (#2559)
  * Restricting access to fork functioanlity to users with Code access (#2534)
  * fix updated update on public key (#2514)
  * Added bucket name to s3 drone plugin (#2505)
  * fixes 500 error on dashboard when using MSSQL (#2504)
  * fix wrong rendering of commit detail page (#2503)
  * Hotfix: Add time manually adds time in nanoseconds (#2499)
  * Remove repository mirrors from "collaborative" list (#2497)
  * fix release failed since the wrong token name (#2496)
  * Fix slice out of bounds error in mailer (#2479)
  * Fix #2470 (#2477)
  * fix orgnization webhooks (#2422)
  * fix webhook test (#2415)
  * fix missing orgnization discord webhook (#2414)
  * Fix route handler order (#2409)
  * Prevent sending emails and notifications to inactive users (#2384)
  * Move themes to plugin directory. Fixes #2372 (#2375)
  * fix duplicated feed (#2370)
  * Fix missing collabrative repos (#2367)
  * Only check at least one email gpg key (#2266)
  * don't check minimum key size when disabled (#1754)
  * Fix run command race (#1470)
  * fix .netrc authentication (#2700)
  * Fix so that user can still fork his own repository to his organizations (#2699)
  * Fix can_push value to false in protected_branch (#2560)
  * Fix copy in email templates (#2801)
  * Fix inconsistencies in user settings UI (#2901)
  * Fix attachments icon size on zoom in/out (#2853)
  * Fix ignored errors in API route (#2850)
  * Fix activity css conflit with semantic ui (#2758)
  * Fix notifications tabs according to semantic-ui docs (#2733)
  * Fix typos in app.ini (#2732)
  * Fix duplicated rel attribute (#2549)
  * Fix tests code to prevent some runtime errors (#2381)
* ENHANCEMENT
  * Memory usage improvements and lower minimal git requirement to 1.7.2 (#3013) (#3028)
  * Set OpenID support on by default when installing new instance (#3010) (#3027)
  * Use api.TrackedTime in API (#2807)
  * Configurable SSH key exchange algorithm and MAC suite (#2806)
  * Add Safari pinned tab icon (#2799)
  * Improve force push detect when push (#2798)
  * Add wrapping to long diff lines (#2789)
  * Link members and repositories count to each page on org home. (#2787)
  * Show Sendmail settings on admin config page (#2782)
  * Add commit count caching (#2774)
  * Use identicon image for default gravatar. (#2767)
  * Add default ssh ciphers (#2761)
  * Remove manual of unsupported option (#2757)
  * Add search mode option to /api/repo/search (#2756)
  * Move swagger-ui under /api/v1 (#2746)
  * Add support for extra sendmail arguments (#2731)
  * Use buffersize to reduce database connection when iterate (#2724)
  * Render plain text README.txt monospaced (#2721)
  * Integration test for activity page (#2704)
  * Merge password and 2fa page on user settings (#2695)
  * Allow custom SSH user in UI for built-in SSH server (#2617) (#2678)
  * Refactor duplicated code in repo handlers (#2657)
  * Replace deprecated Id method with ID (#2655)
  * Remove redudant functions and code (#2652)
  * hide navbar when only 1 sign-in method is available (#2444) (#2648)
  * Change default sort order (#2647)
  * Change pull description text (#2075) (#2646)
  * Remove direct user adding to organization members (#2641)
  * Use session when creating user (#2638)
  * Use Semantic UI's Search component for user and repo search (#2636)
  * Use AfterLoad instead of AfterSet on Structs (#2628)
  * Remove redudant CheckUnit calls in router (#2627)
  * Remove repo unit index (#2621)
  * Remove redudant issue LoadAttributes() calls (#2614)
  * Make indexer code more reusable (#2590)
  * Use custom type and constants to hold available order by options (#2572)
  * Use named ActionType constants in template helper (#2545)
  * Make basic functionality work without JavaScript (#2541)
  * Ctrl + Enter to submit forms (#2540)
  * Automatically regenerate indexer for incompatible versions (#2524)
  * Set default lfs content path to data/lfs (#2521)
  * Convert spaces to tabs in footer.tmpl (#2520)
  * Sort repository tree entries in natural way (#2506)
  * Open external wiki in new window (#2489)
  * Use created & updated instead BeforeInsert & BeforeUpdate (#2482)
  * Hide branch on pull request view or create UI (#2454)
  * improve protected branch to add whitelist support (#2451)
  * some refactors for issue and comments (#2419)
  * Restructure markup & markdown to prepare for multiple markup language… (#2411)
  * Improve issue search (#2387)
  * Add UseCompatSSHURI setting (#2356)
  * Use custom search for each filter type in dashboard (#2343)
  * Failed authentication are now properly logged (#2334)
  * Add environment variable support for Docker image (#2201)
  * Set session and indexers' data files rel to AppDataPath (#2192)
  * Display commit status on landing page of repo (#1784)
* TESTING
  * Add integration test for logging out (#2892)
  * Integration test for user deleting account (#2891)
  * Use different directories for session files in integration tests (#2834)
  * Add deleted_branch table fixture (#2832)
  * Include HTTP method in test error message (#2815)
  * Add repository search unit and integration tests (#2575)
  * Expand fixtures (#2571)
  * Fix /api/repo/search integration tests (#2550)
  * Make integration tests more user-friendly (#2536)
  * Fix unit test race condition (#2516)
  * Add missing fixture to clean gpg_key table (#2494)
  * Hotfix for integration testing (#2473)
  * Make repo private to not interfere with other tests (#2467)
  * Error message for integration test (#2410)
  * Fix "index out of range" runtime error in repo_list tests (#2376)
  * Add git clone test on integration test (#1682)
* TRANSLATION
  * Fix localization texts that contain semicolon (#2900)
  * Fix activity locale (#2709)
  * Update translation from crowdin (#2368)
* BUILD
  * change the email and name to GitBot account. (#2848)
  * Fix removing backslash before quotes in translations (#2831)
  * add gitea remote in drone. (#2817)
  * add remote name for git push. (#2816)
  * Launch Gitea with custom UID/GID for 'git' user (fixes #2286) (#2791)
  * Download and pushing translations (#2727)
  * Automatic update of translations (#2585)
  * Add pre-build step for nodejs stuff (#2581)
  * Compress css with nodejs (#2580)
  * Remove go version check for make fmt (#2558)
  * Fix lint errors (#2547)
  * Always run fmt check in CI (#2546)
  * Fix fmt errors (#2544)
  * add codecov.io service. (#2493)
  * Fix some tests : make coverage -> test (#2492)
  * Fix fmt error in mailer (#2490)
  * Allow changing integration test database connection using env variables (#2484)
  * Add changelog config file for generate changelog (#2461)
  * Changes for latest DroneCI (#2362)
  * Use standard lessc and minify CSS using Node.js (#2337)
* DOCS
  * Update screenshots on README (#2910)
  * Gogs -> Gitea (#2909)
  * Update swagger documentation (#2899)
  * Fix typo (#2810)
  * Fix Polish language name spelling (#2766)
  * Fix Various Grammar Issues and Adjust Unnatural Wording (#2737)
  * Add maintainer label for docker file (#2658)
  * Link to gitea-specific Vagrant example (#2624)
  * add release notes of v1.1.4 (#2463)
  * Wrap most paragraphs to 80 columns (#2396)
  * Update CONTRIBUTING following #2329 discussion (#2394)
  * Update hard-coded version to 1.3.0+dev (#2390)
  * Clarify Translation Process. Also fix branch names (#2378)
  * Admin grammar fixes and improvements (#2056)
* MISC
  * Sync MaxGitDiffLineCharacters with conf/app.ini (#2779)
  * Dockerfile: Updated alpine image to 3.6. (#2486)
  * Basic VSCode configuration for building and debugging (#2483)
  * Added vendor dir for js/css libs; Documented sources (#1484) (#2241)

## [1.2.3](https://github.com/go-gitea/gitea/releases/tag/v1.2.3) - 2017-11-03
* BUGFIXES
  * Only require one email when validating GPG key (#2266, #2467, #2663) (#2788)
cmd/web.go (30 changes)

@@ -19,8 +19,10 @@ import (
	"code.gitea.io/gitea/routers"
	"code.gitea.io/gitea/routers/routes"

	"github.com/Unknwon/com"
	context2 "github.com/gorilla/context"
	"github.com/urfave/cli"
	ini "gopkg.in/ini.v1"
)

// CmdWeb represents the available web sub-command.

@@ -69,6 +71,34 @@ func runWeb(ctx *cli.Context) error {
	if ctx.IsSet("port") {
		setting.AppURL = strings.Replace(setting.AppURL, setting.HTTPPort, ctx.String("port"), 1)
		setting.HTTPPort = ctx.String("port")

		switch setting.Protocol {
		case setting.UnixSocket:
		case setting.FCGI:
		default:
			// Save LOCAL_ROOT_URL if port changed
			cfg := ini.Empty()
			if com.IsFile(setting.CustomConf) {
				// Keeps custom settings if there is already something.
				if err := cfg.Append(setting.CustomConf); err != nil {
					return fmt.Errorf("Failed to load custom conf '%s': %v", setting.CustomConf, err)
				}
			}

			defaultLocalURL := string(setting.Protocol) + "://"
			if setting.HTTPAddr == "0.0.0.0" {
				defaultLocalURL += "localhost"
			} else {
				defaultLocalURL += setting.HTTPAddr
			}
			defaultLocalURL += ":" + setting.HTTPPort + "/"

			cfg.Section("server").Key("LOCAL_ROOT_URL").SetValue(defaultLocalURL)

			if err := cfg.SaveTo(setting.CustomConf); err != nil {
				return fmt.Errorf("Error saving generated JWT Secret to custom config: %v", err)
			}
		}
	}

	var listenAddr string
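The hunk above derives a `LOCAL_ROOT_URL` from the protocol, HTTP address, and the overridden port, then writes it back to the custom config file. A minimal standalone sketch of the same idea using gopkg.in/ini.v1; the helper name `writeLocalRootURL` and the example paths are illustrative, not Gitea API:

```go
package main

import (
	"fmt"
	"os"

	ini "gopkg.in/ini.v1"
)

// writeLocalRootURL mirrors what the hunk above does when the --port flag
// overrides the configured HTTP port: compute a local root URL and persist it.
func writeLocalRootURL(confPath, protocol, httpAddr, httpPort string) error {
	cfg := ini.Empty()
	// Keep any existing custom settings if the file is already there.
	if _, err := os.Stat(confPath); err == nil {
		if err := cfg.Append(confPath); err != nil {
			return fmt.Errorf("load custom conf %q: %v", confPath, err)
		}
	}

	localURL := protocol + "://"
	if httpAddr == "0.0.0.0" {
		localURL += "localhost"
	} else {
		localURL += httpAddr
	}
	localURL += ":" + httpPort + "/"

	cfg.Section("server").Key("LOCAL_ROOT_URL").SetValue(localURL)
	return cfg.SaveTo(confPath)
}

func main() {
	if err := writeLocalRootURL("custom/conf/app.ini", "http", "0.0.0.0", "3001"); err != nil {
		fmt.Println("error:", err)
	}
}
```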
@@ -46,8 +46,10 @@ func TestRedirectsNoLogin(t *testing.T) {
	prepareTestEnv(t)

	var redirects = map[string]string{
		"/user2/repo1/commits/master": "/user2/repo1/commits/branch/master",
		"/user2/repo1/src/master": "/user2/repo1/src/branch/master",
		"/user2/repo1/commits/master": "/user2/repo1/commits/branch/master",
		"/user2/repo1/src/master": "/user2/repo1/src/branch/master",
		"/user2/repo1/src/master/file.txt": "/user2/repo1/src/branch/master/file.txt",
		"/user2/repo1/src/master/directory/file.txt": "/user2/repo1/src/branch/master/directory/file.txt",
	}
	for link, redirectLink := range redirects {
		req := NewRequest(t, "GET", link)
@@ -13,7 +13,7 @@ import (
	"github.com/stretchr/testify/assert"
)

func testPullCreate(t *testing.T, session *TestSession, user, repo, branch string) *TestResponse {
func testPullCreate(t *testing.T, session *TestSession, user, repo, branch, title string) *TestResponse {
	req := NewRequest(t, "GET", path.Join(user, repo))
	resp := session.MakeRequest(t, req, http.StatusOK)

@@ -34,7 +34,7 @@ func testPullCreate(t *testing.T, session *TestSession, user, repo, branch strin
	assert.True(t, exists, "The template has changed")
	req = NewRequestWithValues(t, "POST", link, map[string]string{
		"_csrf": htmlDoc.GetCSRF(),
		"title": "This is a pull title",
		"title": title,
	})
	resp = session.MakeRequest(t, req, http.StatusFound)

@@ -48,5 +48,40 @@ func TestPullCreate(t *testing.T) {
	session := loginUser(t, "user1")
	testRepoFork(t, session, "user2", "repo1", "user1", "repo1")
	testEditFile(t, session, "user1", "repo1", "master", "README.md", "Hello, World (Edited)\n")
	testPullCreate(t, session, "user1", "repo1", "master")
	testPullCreate(t, session, "user1", "repo1", "master", "This is a pull title")
}

func TestPullCreate_TitleEscape(t *testing.T) {
	prepareTestEnv(t)
	session := loginUser(t, "user1")
	testRepoFork(t, session, "user2", "repo1", "user1", "repo1")
	testEditFile(t, session, "user1", "repo1", "master", "README.md", "Hello, World (Edited)\n")
	resp := testPullCreate(t, session, "user1", "repo1", "master", "<i>XSS PR</i>")

	// check the redirected URL
	url := RedirectURL(t, resp)
	assert.Regexp(t, "^/user2/repo1/pulls/[0-9]*$", url)

	// Edit title
	req := NewRequest(t, "GET", url)
	resp = session.MakeRequest(t, req, http.StatusOK)
	htmlDoc := NewHTMLParser(t, resp.Body)
	editTestTitleURL, exists := htmlDoc.doc.Find("#save-edit-title").First().Attr("data-update-url")
	assert.True(t, exists, "The template has changed")

	req = NewRequestWithValues(t, "POST", editTestTitleURL, map[string]string{
		"_csrf": htmlDoc.GetCSRF(),
		"title": "<u>XSS PR</u>",
	})
	session.MakeRequest(t, req, http.StatusOK)

	req = NewRequest(t, "GET", url)
	resp = session.MakeRequest(t, req, http.StatusOK)
	htmlDoc = NewHTMLParser(t, resp.Body)
	titleHTML, err := htmlDoc.doc.Find(".comments .event .text b").First().Html()
	assert.NoError(t, err)
	assert.Equal(t, "<i>XSS PR</i>", titleHTML)
	titleHTML, err = htmlDoc.doc.Find(".comments .event .text b").Next().Html()
	assert.NoError(t, err)
	assert.Equal(t, "<u>XSS PR</u>", titleHTML)
}
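The TestPullCreate_TitleEscape test above verifies that a hostile title such as `<i>XSS PR</i>` ends up rendered as text rather than as markup. A small hedged illustration of the escaping step itself, using Go's standard library rather than Gitea's template code:

```go
package main

import (
	"fmt"
	"html"
)

func main() {
	title := "<i>XSS PR</i>"
	// html.EscapeString turns markup characters into entities, so a hostile
	// title renders as literal text instead of injecting HTML into the page.
	fmt.Println(html.EscapeString(title)) // &lt;i&gt;XSS PR&lt;/i&gt;
}
```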
@@ -51,7 +51,7 @@ func TestPullMerge(t *testing.T) {
	testRepoFork(t, session, "user2", "repo1", "user1", "repo1")
	testEditFile(t, session, "user1", "repo1", "master", "README.md", "Hello, World (Edited)\n")

	resp := testPullCreate(t, session, "user1", "repo1", "master")
	resp := testPullCreate(t, session, "user1", "repo1", "master", "This is a pull title")

	elem := strings.Split(RedirectURL(t, resp), "/")
	assert.EqualValues(t, "pulls", elem[3])

@@ -64,7 +64,7 @@ func TestPullCleanUpAfterMerge(t *testing.T) {
	testRepoFork(t, session, "user2", "repo1", "user1", "repo1")
	testEditFileToNewBranch(t, session, "user1", "repo1", "master", "feature/test", "README.md", "Hello, World (Edited)\n")

	resp := testPullCreate(t, session, "user1", "repo1", "feature/test")
	resp := testPullCreate(t, session, "user1", "repo1", "feature/test", "This is a pull title")

	elem := strings.Split(RedirectURL(t, resp), "/")
	assert.EqualValues(t, "pulls", elem[3])
@@ -19,16 +19,16 @@ func TestRepoActivity(t *testing.T) {
	// Create PRs (1 merged & 2 proposed)
	testRepoFork(t, session, "user2", "repo1", "user1", "repo1")
	testEditFile(t, session, "user1", "repo1", "master", "README.md", "Hello, World (Edited)\n")
	resp := testPullCreate(t, session, "user1", "repo1", "master")
	resp := testPullCreate(t, session, "user1", "repo1", "master", "This is a pull title")
	elem := strings.Split(RedirectURL(t, resp), "/")
	assert.EqualValues(t, "pulls", elem[3])
	testPullMerge(t, session, elem[1], elem[2], elem[4])

	testEditFileToNewBranch(t, session, "user1", "repo1", "master", "feat/better_readme", "README.md", "Hello, World (Edited Again)\n")
	testPullCreate(t, session, "user1", "repo1", "feat/better_readme")
	testPullCreate(t, session, "user1", "repo1", "feat/better_readme", "This is a pull title")

	testEditFileToNewBranch(t, session, "user1", "repo1", "master", "feat/much_better_readme", "README.md", "Hello, World (Edited More)\n")
	testPullCreate(t, session, "user1", "repo1", "feat/much_better_readme")
	testPullCreate(t, session, "user1", "repo1", "feat/much_better_readme", "This is a pull title")

	// Create issues (3 new issues)
	testNewIssue(t, session, "user2", "repo1", "Issue 1", "Description 1")
@@ -238,7 +238,7 @@ func ParsePatch(maxLines, maxLineCharacters, maxFiles int, reader io.Reader) (*D
	var (
		diff = &Diff{Files: make([]*DiffFile, 0)}

		curFile *DiffFile
		curFile = &DiffFile{}
		curSection = &DiffSection{
			Lines: make([]*DiffLine, 0, 10),
		}
@@ -59,6 +59,10 @@ type Version struct {
	Version int64
}

func emptyMigration(x *xorm.Engine) error {
	return nil
}

// This is a sequence of migrations. Add new migrations to the bottom of the list.
// If you want to "retire" a migration, remove it from the top of the list and
// update minDBVersion accordingly

@@ -127,17 +131,17 @@ var migrations = []Migration{
	// v38 -> v39
	NewMigration("remove commits and settings unit types", removeCommitsUnitType),
	// v39 -> v40
	NewMigration("adds time tracking and stopwatches", addTimetracking),
	// v40 -> v41
	NewMigration("migrate protected branch struct", migrateProtectedBranchStruct),
	// v41 -> v42
	NewMigration("add default value to user prohibit_login", addDefaultValueToUserProhibitLogin),
	// v42 -> v43
	NewMigration("add tags to releases and sync existing repositories", releaseAddColumnIsTagAndSyncTags),
	// v43 -> v44
	NewMigration("fix protected branch can push value to false", fixProtectedBranchCanPushValue),
	// v44 -> v45
	// v40 -> v41
	NewMigration("add tags to releases and sync existing repositories", releaseAddColumnIsTagAndSyncTags),
	// v41 -> v42
	NewMigration("remove duplicate unit types", removeDuplicateUnitTypes),
	// v42 -> v43
	NewMigration("empty step", emptyMigration),
	// v43 -> v44
	NewMigration("empty step", emptyMigration),
	// v44 -> v45
	NewMigration("empty step", emptyMigration),
	// v45 -> v46
	NewMigration("remove index column from repo_unit table", removeIndexColumnFromRepoUnitTable),
	// v46 -> v47

@@ -146,6 +150,12 @@ var migrations = []Migration{
	NewMigration("add deleted branches", addDeletedBranch),
	// v48 -> v49
	NewMigration("add repo indexer status", addRepoIndexerStatus),
	// v49 -> v50
	NewMigration("adds time tracking and stopwatches", addTimetracking),
	// v50 -> v51
	NewMigration("migrate protected branch struct", migrateProtectedBranchStruct),
	// v51 -> v52
	NewMigration("add default value to user prohibit_login", addDefaultValueToUserProhibitLogin),
}

// Migrate database to current version
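The reordered list above keeps exactly one entry per schema version: steps that were moved to later versions leave `emptyMigration` placeholders behind, so databases that already passed those versions neither skip nor re-run anything. A stripped-down sketch of that pattern; the `migration` struct here is a simplified stand-in, not Gitea's actual `Migration` interface:

```go
package main

import "fmt"

// migration is a simplified stand-in for the values built with NewMigration
// above: a description plus a schema-change function.
type migration struct {
	description string
	run         func() error
}

// emptyStep mirrors emptyMigration: it occupies a version slot but changes
// nothing, so the version counter still advances in lockstep with the list.
func emptyStep() error { return nil }

var steps = []migration{
	{"remove duplicate unit types", func() error { return nil }}, // v41 -> v42
	{"empty step", emptyStep},                                    // v42 -> v43 (placeholder after reordering)
	{"empty step", emptyStep},                                    // v43 -> v44
}

func main() {
	for i, s := range steps {
		if err := s.run(); err != nil {
			fmt.Printf("migration %d (%s) failed: %v\n", i, s.description, err)
			return
		}
		fmt.Printf("applied %d: %s\n", i, s.description)
	}
}
```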
@@ -6,69 +6,21 @@ package migrations

import (
	"fmt"
	"time"

	"code.gitea.io/gitea/modules/setting"

	"github.com/go-xorm/xorm"
)

func addTimetracking(x *xorm.Engine) error {
	// RepoUnit describes all units of a repository
	type RepoUnit struct {
		ID int64
		RepoID int64 `xorm:"INDEX(s)"`
		Type int `xorm:"INDEX(s)"`
		Index int
		Config map[string]interface{} `xorm:"JSON"`
		CreatedUnix int64 `xorm:"INDEX CREATED"`
		Created time.Time `xorm:"-"`
func fixProtectedBranchCanPushValue(x *xorm.Engine) error {
	type ProtectedBranch struct {
		CanPush bool `xorm:"NOT NULL DEFAULT false"`
	}

	// Stopwatch see models/issue_stopwatch.go
	type Stopwatch struct {
		ID int64 `xorm:"pk autoincr"`
		IssueID int64 `xorm:"INDEX"`
		UserID int64 `xorm:"INDEX"`
		Created time.Time `xorm:"-"`
		CreatedUnix int64
	}

	// TrackedTime see models/issue_tracked_time.go
	type TrackedTime struct {
		ID int64 `xorm:"pk autoincr" json:"id"`
		IssueID int64 `xorm:"INDEX" json:"issue_id"`
		UserID int64 `xorm:"INDEX" json:"user_id"`
		Created time.Time `xorm:"-" json:"created"`
		CreatedUnix int64 `json:"-"`
		Time int64 `json:"time"`
	}

	if err := x.Sync2(new(Stopwatch)); err != nil {
	if err := x.Sync2(new(ProtectedBranch)); err != nil {
		return fmt.Errorf("Sync2: %v", err)
	}
	if err := x.Sync2(new(TrackedTime)); err != nil {
		return fmt.Errorf("Sync2: %v", err)
	}
	//Updating existing issue units
	units := make([]*RepoUnit, 0, 100)
	err := x.Where("`type` = ?", V16UnitTypeIssues).Find(&units)
	if err != nil {
		return fmt.Errorf("Query repo units: %v", err)
	}
	for _, unit := range units {
		if unit.Config == nil {
			unit.Config = make(map[string]interface{})
		}
		if _, ok := unit.Config["EnableTimetracker"]; !ok {
			unit.Config["EnableTimetracker"] = setting.Service.DefaultEnableTimetracking
		}
		if _, ok := unit.Config["AllowOnlyContributorsToTrackTime"]; !ok {
			unit.Config["AllowOnlyContributorsToTrackTime"] = setting.Service.DefaultAllowOnlyContributorsToTrackTime
		}
		if _, err := x.ID(unit.ID).Cols("config").Update(unit); err != nil {
			return err
		}
	}
	return nil

	_, err := x.Cols("can_push").Update(&ProtectedBranch{
		CanPush: false,
	})
	return err
}
@@ -6,50 +6,52 @@ package migrations

import (
	"fmt"
	"time"

	"code.gitea.io/git"
	"code.gitea.io/gitea/models"
	"code.gitea.io/gitea/modules/log"
	"code.gitea.io/gitea/modules/setting"

	"github.com/go-xorm/xorm"
)

func migrateProtectedBranchStruct(x *xorm.Engine) error {
	type ProtectedBranch struct {
		ID int64 `xorm:"pk autoincr"`
		RepoID int64 `xorm:"UNIQUE(s)"`
		BranchName string `xorm:"UNIQUE(s)"`
		CanPush bool
		Created time.Time `xorm:"-"`
		CreatedUnix int64
		Updated time.Time `xorm:"-"`
		UpdatedUnix int64
// ReleaseV39 describes the added field for Release
type ReleaseV39 struct {
	IsTag bool `xorm:"NOT NULL DEFAULT false"`
}

// TableName will be invoked by XORM to customrize the table name
func (*ReleaseV39) TableName() string {
	return "release"
}

func releaseAddColumnIsTagAndSyncTags(x *xorm.Engine) error {
	if err := x.Sync2(new(ReleaseV39)); err != nil {
		return fmt.Errorf("Sync2: %v", err)
	}

	var pbs []ProtectedBranch
	err := x.Find(&pbs)
	if err != nil {
		return err
	}
	// For the sake of SQLite3, we can't use x.Iterate here.
	offset := 0
	pageSize := 20
	for {
		repos := make([]*models.Repository, 0, pageSize)
		if err := x.Table("repository").Asc("id").Limit(pageSize, offset).Find(&repos); err != nil {
			return fmt.Errorf("select repos [offset: %d]: %v", offset, err)
		}
		for _, repo := range repos {
			gitRepo, err := git.OpenRepository(repo.RepoPath())
			if err != nil {
				log.Warn("OpenRepository: %v", err)
				continue
			}

	for _, pb := range pbs {
		if pb.CanPush {
			if _, err = x.ID(pb.ID).Delete(new(ProtectedBranch)); err != nil {
				return err
			if err = models.SyncReleasesWithTags(repo, gitRepo); err != nil {
				log.Warn("SyncReleasesWithTags: %v", err)
			}
		}
	}

	switch {
	case setting.UseSQLite3:
		log.Warn("Unable to drop columns in SQLite")
	case setting.UseMySQL, setting.UsePostgreSQL, setting.UseMSSQL, setting.UseTiDB:
		if _, err := x.Exec("ALTER TABLE protected_branch DROP COLUMN can_push"); err != nil {
			return fmt.Errorf("DROP COLUMN can_push: %v", err)
		if len(repos) < pageSize {
			break
		}
	default:
		log.Fatal(4, "Unrecognized DB")
		offset += pageSize
	}

	return nil
}
@@ -7,36 +7,63 @@ package migrations
import (
	"fmt"

	"code.gitea.io/gitea/models"

	"github.com/go-xorm/xorm"
)

func addDefaultValueToUserProhibitLogin(x *xorm.Engine) (err error) {
	user := &models.User{
		ProhibitLogin: false,
func removeDuplicateUnitTypes(x *xorm.Engine) error {
	// RepoUnit describes all units of a repository
	type RepoUnit struct {
		RepoID int64
		Type int
	}

	if _, err := x.Where("`prohibit_login` IS NULL").Cols("prohibit_login").Update(user); err != nil {
	// Enumerate all the unit types
	const (
		UnitTypeCode = iota + 1 // 1 code
		UnitTypeIssues // 2 issues
		UnitTypePullRequests // 3 PRs
		UnitTypeReleases // 4 Releases
		UnitTypeWiki // 5 Wiki
		UnitTypeExternalWiki // 6 ExternalWiki
		UnitTypeExternalTracker // 7 ExternalTracker
	)

	var externalIssueRepoUnits []RepoUnit
	err := x.Where("type = ?", UnitTypeExternalTracker).Find(&externalIssueRepoUnits)
	if err != nil {
		return fmt.Errorf("Query repositories: %v", err)
	}

	var externalWikiRepoUnits []RepoUnit
	err = x.Where("type = ?", UnitTypeExternalWiki).Find(&externalWikiRepoUnits)
	if err != nil {
		return fmt.Errorf("Query repositories: %v", err)
	}

	sess := x.NewSession()
	defer sess.Close()

	if err := sess.Begin(); err != nil {
		return err
	}

	dialect := x.Dialect().DriverName()

	switch dialect {
	case "mysql":
		_, err = x.Exec("ALTER TABLE user MODIFY `prohibit_login` tinyint(1) NOT NULL DEFAULT 0")
	case "postgres":
		_, err = x.Exec("ALTER TABLE \"user\" ALTER COLUMN `prohibit_login` SET NOT NULL, ALTER COLUMN `prohibit_login` SET DEFAULT false")
	case "mssql":
		// xorm already set DEFAULT 0 for data type BIT in mssql
		_, err = x.Exec(`ALTER TABLE [user] ALTER COLUMN "prohibit_login" BIT NOT NULL`)
	case "sqlite3":
	for _, repoUnit := range externalIssueRepoUnits {
		if _, err = sess.Delete(&RepoUnit{
			RepoID: repoUnit.RepoID,
			Type: UnitTypeIssues,
		}); err != nil {
			return fmt.Errorf("Delete repo unit: %v", err)
		}
	}

	if err != nil {
		return fmt.Errorf("Error changing user prohibit_login column definition: %v", err)
	for _, repoUnit := range externalWikiRepoUnits {
		if _, err = sess.Delete(&RepoUnit{
			RepoID: repoUnit.RepoID,
			Type: UnitTypeWiki,
		}); err != nil {
			return fmt.Errorf("Delete repo unit: %v", err)
		}
	}

	return err
	return sess.Commit()
}
@@ -1,57 +0,0 @@
// Copyright 2017 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.

package migrations

import (
	"fmt"

	"code.gitea.io/git"
	"code.gitea.io/gitea/models"
	"code.gitea.io/gitea/modules/log"

	"github.com/go-xorm/xorm"
)

// ReleaseV39 describes the added field for Release
type ReleaseV39 struct {
	IsTag bool `xorm:"NOT NULL DEFAULT false"`
}

// TableName will be invoked by XORM to customrize the table name
func (*ReleaseV39) TableName() string {
	return "release"
}

func releaseAddColumnIsTagAndSyncTags(x *xorm.Engine) error {
	if err := x.Sync2(new(ReleaseV39)); err != nil {
		return fmt.Errorf("Sync2: %v", err)
	}

	// For the sake of SQLite3, we can't use x.Iterate here.
	offset := 0
	pageSize := 20
	for {
		repos := make([]*models.Repository, 0, pageSize)
		if err := x.Table("repository").Asc("id").Limit(pageSize, offset).Find(&repos); err != nil {
			return fmt.Errorf("select repos [offset: %d]: %v", offset, err)
		}
		for _, repo := range repos {
			gitRepo, err := git.OpenRepository(repo.RepoPath())
			if err != nil {
				log.Warn("OpenRepository: %v", err)
				continue
			}

			if err = models.SyncReleasesWithTags(repo, gitRepo); err != nil {
				log.Warn("SyncReleasesWithTags: %v", err)
			}
		}
		if len(repos) < pageSize {
			break
		}
		offset += pageSize
	}
	return nil
}
@@ -1,26 +0,0 @@
// Copyright 2017 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.

package migrations

import (
	"fmt"

	"github.com/go-xorm/xorm"
)

func fixProtectedBranchCanPushValue(x *xorm.Engine) error {
	type ProtectedBranch struct {
		CanPush bool `xorm:"NOT NULL DEFAULT false"`
	}

	if err := x.Sync2(new(ProtectedBranch)); err != nil {
		return fmt.Errorf("Sync2: %v", err)
	}

	_, err := x.Cols("can_push").Update(&ProtectedBranch{
		CanPush: false,
	})
	return err
}
@@ -1,69 +0,0 @@
// Copyright 2017 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.

package migrations

import (
	"fmt"

	"github.com/go-xorm/xorm"
)

func removeDuplicateUnitTypes(x *xorm.Engine) error {
	// RepoUnit describes all units of a repository
	type RepoUnit struct {
		RepoID int64
		Type int
	}

	// Enumerate all the unit types
	const (
		UnitTypeCode = iota + 1 // 1 code
		UnitTypeIssues // 2 issues
		UnitTypePullRequests // 3 PRs
		UnitTypeReleases // 4 Releases
		UnitTypeWiki // 5 Wiki
		UnitTypeExternalWiki // 6 ExternalWiki
		UnitTypeExternalTracker // 7 ExternalTracker
	)

	var externalIssueRepoUnits []RepoUnit
	err := x.Where("type = ?", UnitTypeExternalTracker).Find(&externalIssueRepoUnits)
	if err != nil {
		return fmt.Errorf("Query repositories: %v", err)
	}

	var externalWikiRepoUnits []RepoUnit
	err = x.Where("type = ?", UnitTypeExternalWiki).Find(&externalWikiRepoUnits)
	if err != nil {
		return fmt.Errorf("Query repositories: %v", err)
	}

	sess := x.NewSession()
	defer sess.Close()

	if err := sess.Begin(); err != nil {
		return err
	}

	for _, repoUnit := range externalIssueRepoUnits {
		if _, err = sess.Delete(&RepoUnit{
			RepoID: repoUnit.RepoID,
			Type: UnitTypeIssues,
		}); err != nil {
			return fmt.Errorf("Delete repo unit: %v", err)
		}
	}

	for _, repoUnit := range externalWikiRepoUnits {
		if _, err = sess.Delete(&RepoUnit{
			RepoID: repoUnit.RepoID,
			Type: UnitTypeWiki,
		}); err != nil {
			return fmt.Errorf("Delete repo unit: %v", err)
		}
	}

	return sess.Commit()
}
@@ -0,0 +1,73 @@
// Copyright 2017 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.

package migrations

import (
	"fmt"
	"time"

	"code.gitea.io/gitea/modules/setting"

	"github.com/go-xorm/xorm"
)

func addTimetracking(x *xorm.Engine) error {
	// RepoUnit describes all units of a repository
	type RepoUnit struct {
		ID int64
		RepoID int64 `xorm:"INDEX(s)"`
		Type int `xorm:"INDEX(s)"`
		Config map[string]interface{} `xorm:"JSON"`
		CreatedUnix int64 `xorm:"INDEX CREATED"`
		Created time.Time `xorm:"-"`
	}

	// Stopwatch see models/issue_stopwatch.go
	type Stopwatch struct {
		ID int64 `xorm:"pk autoincr"`
		IssueID int64 `xorm:"INDEX"`
		UserID int64 `xorm:"INDEX"`
		Created time.Time `xorm:"-"`
		CreatedUnix int64
	}

	// TrackedTime see models/issue_tracked_time.go
	type TrackedTime struct {
		ID int64 `xorm:"pk autoincr" json:"id"`
		IssueID int64 `xorm:"INDEX" json:"issue_id"`
		UserID int64 `xorm:"INDEX" json:"user_id"`
		Created time.Time `xorm:"-" json:"created"`
		CreatedUnix int64 `json:"-"`
		Time int64 `json:"time"`
	}

	if err := x.Sync2(new(Stopwatch)); err != nil {
		return fmt.Errorf("Sync2: %v", err)
	}
	if err := x.Sync2(new(TrackedTime)); err != nil {
		return fmt.Errorf("Sync2: %v", err)
	}
	//Updating existing issue units
	units := make([]*RepoUnit, 0, 100)
	err := x.Where("`type` = ?", V16UnitTypeIssues).Find(&units)
	if err != nil {
		return fmt.Errorf("Query repo units: %v", err)
	}
	for _, unit := range units {
		if unit.Config == nil {
			unit.Config = make(map[string]interface{})
		}
		if _, ok := unit.Config["EnableTimetracker"]; !ok {
			unit.Config["EnableTimetracker"] = setting.Service.DefaultEnableTimetracking
		}
		if _, ok := unit.Config["AllowOnlyContributorsToTrackTime"]; !ok {
			unit.Config["AllowOnlyContributorsToTrackTime"] = setting.Service.DefaultAllowOnlyContributorsToTrackTime
		}
		if _, err := x.ID(unit.ID).Cols("config").Update(unit); err != nil {
			return err
		}
	}
	return nil
}
@@ -0,0 +1,55 @@
// Copyright 2017 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.

package migrations

import (
	"time"

	"code.gitea.io/gitea/modules/log"
	"code.gitea.io/gitea/modules/setting"

	"github.com/go-xorm/xorm"
)

func migrateProtectedBranchStruct(x *xorm.Engine) error {
	type ProtectedBranch struct {
		ID int64 `xorm:"pk autoincr"`
		RepoID int64 `xorm:"UNIQUE(s)"`
		BranchName string `xorm:"UNIQUE(s)"`
		CanPush bool
		Created time.Time `xorm:"-"`
		CreatedUnix int64
		Updated time.Time `xorm:"-"`
		UpdatedUnix int64
	}

	var pbs []ProtectedBranch
	err := x.Find(&pbs)
	if err != nil {
		return err
	}

	for _, pb := range pbs {
		if pb.CanPush {
			if _, err = x.ID(pb.ID).Delete(new(ProtectedBranch)); err != nil {
				return err
			}
		}
	}

	switch {
	case setting.UseSQLite3:
		log.Warn("Unable to drop columns in SQLite")
	case setting.UseMySQL, setting.UsePostgreSQL, setting.UseMSSQL, setting.UseTiDB:
		if _, err := x.Exec("ALTER TABLE protected_branch DROP COLUMN can_push"); err != nil {
			// Ignoring this error in case we run this migration second time (after migration reordering)
			log.Warn("DROP COLUMN can_push: %v", err)
		}
	default:
		log.Fatal(4, "Unrecognized DB")
	}

	return nil
}
@@ -0,0 +1,42 @@
// Copyright 2017 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.

package migrations

import (
	"code.gitea.io/gitea/models"
	"code.gitea.io/gitea/modules/log"

	"github.com/go-xorm/xorm"
)

func addDefaultValueToUserProhibitLogin(x *xorm.Engine) (err error) {
	user := &models.User{
		ProhibitLogin: false,
	}

	if _, err := x.Where("`prohibit_login` IS NULL").Cols("prohibit_login").Update(user); err != nil {
		return err
	}

	dialect := x.Dialect().DriverName()

	switch dialect {
	case "mysql":
		_, err = x.Exec("ALTER TABLE user MODIFY `prohibit_login` tinyint(1) NOT NULL DEFAULT 0")
	case "postgres":
		_, err = x.Exec("ALTER TABLE \"user\" ALTER COLUMN `prohibit_login` SET NOT NULL, ALTER COLUMN `prohibit_login` SET DEFAULT false")
	case "mssql":
		// xorm already set DEFAULT 0 for data type BIT in mssql
		_, err = x.Exec(`ALTER TABLE [user] ALTER COLUMN "prohibit_login" BIT NOT NULL`)
	case "sqlite3":
	}

	if err != nil {
		// Ignoring this error in case we run this migration second time (after migration reordering)
		log.Warn("Error changing user prohibit_login column definition: %v", err)
	}

	return nil
}
@@ -605,9 +605,14 @@ func (repo *Repository) RepoPath() string {
	return repo.repoPath(x)
}

// GitConfigPath returns the path to a repository's git config/ directory
func GitConfigPath(repoPath string) string {
	return filepath.Join(repoPath, "config")
}

// GitConfigPath returns the repository git config path
func (repo *Repository) GitConfigPath() string {
	return filepath.Join(repo.RepoPath(), "config")
	return GitConfigPath(repo.RepoPath())
}

// RelLink returns the repository relative link

@@ -1496,20 +1501,6 @@ func TransferOwnership(doer *User, newOwnerName string, repo *Repository) error

	// Remove old team-repository relations.
	if owner.IsOrganization() {
		if err = owner.getTeams(sess); err != nil {
			return fmt.Errorf("getTeams: %v", err)
		}
		for _, t := range owner.Teams {
			if !t.hasRepository(sess, repo.ID) {
				continue
			}

			t.NumRepos--
			if _, err := sess.ID(t.ID).Cols("num_repos").Update(t); err != nil {
				return fmt.Errorf("decrease team repository count '%d': %v", t.ID, err)
			}
		}

		if err = owner.removeOrgRepo(sess, repo.ID); err != nil {
			return fmt.Errorf("removeOrgRepo: %v", err)
		}
@@ -100,10 +100,6 @@ func populateRepoIndexer() error {
	}
}

type updateBatch struct {
	updates []indexer.RepoIndexerUpdate
}

func updateRepoIndexer(repo *Repository) error {
	changes, err := getRepoChanges(repo)
	if err != nil {

@@ -163,6 +159,10 @@ func addUpdate(filename string, repo *Repository, batch *indexer.Batch) error {
		return err
	} else if stat.Size() > setting.Indexer.MaxIndexerFileSize {
		return nil
	} else if stat.IsDir() {
		// file could actually be a directory, if it is the root of a submodule.
		// We do not index submodule contents, so don't do anything.
		return nil
	}
	fileContents, err := ioutil.ReadFile(filepath)
	if err != nil {
@@ -6,18 +6,18 @@ package models

import (
	"fmt"
	"strings"
	"time"

	"github.com/Unknwon/com"
	"github.com/go-xorm/xorm"
	"gopkg.in/ini.v1"

	"code.gitea.io/git"
	"code.gitea.io/gitea/modules/log"
	"code.gitea.io/gitea/modules/process"
	"code.gitea.io/gitea/modules/setting"
	"code.gitea.io/gitea/modules/sync"
	"code.gitea.io/gitea/modules/util"

	"github.com/Unknwon/com"
	"github.com/go-xorm/xorm"
	"gopkg.in/ini.v1"
)

// MirrorQueue holds an UniqueQueue object of the mirror

@@ -76,41 +76,41 @@ func (m *Mirror) ScheduleNextUpdate() {
	m.NextUpdate = time.Now().Add(m.Interval)
}

func remoteAddress(repoPath string) (string, error) {
	cfg, err := ini.Load(GitConfigPath(repoPath))
	if err != nil {
		return "", err
	}
	return cfg.Section("remote \"origin\"").Key("url").Value(), nil
}

func (m *Mirror) readAddress() {
	if len(m.address) > 0 {
		return
	}

	cfg, err := ini.Load(m.Repo.GitConfigPath())
	var err error
	m.address, err = remoteAddress(m.Repo.RepoPath())
	if err != nil {
		log.Error(4, "Load: %v", err)
		return
		log.Error(4, "remoteAddress: %v", err)
	}
	m.address = cfg.Section("remote \"origin\"").Key("url").Value()
}

// HandleCloneUserCredentials replaces user credentials from HTTP/HTTPS URL
// with placeholder <credentials>.
// It will fail for any other forms of clone addresses.
func HandleCloneUserCredentials(url string, mosaics bool) string {
	i := strings.Index(url, "@")
	if i == -1 {
		return url
// sanitizeOutput sanitizes output of a command, replacing occurrences of the
// repository's remote address with a sanitized version.
func sanitizeOutput(output, repoPath string) (string, error) {
	remoteAddr, err := remoteAddress(repoPath)
	if err != nil {
		// if we're unable to load the remote address, then we're unable to
		// sanitize.
		return "", err
	}
	start := strings.Index(url, "://")
	if start == -1 {
		return url
	}
	if mosaics {
		return url[:start+3] + "<credentials>" + url[i:]
	}
	return url[:start+3] + url[i+1:]
	return util.SanitizeMessage(output, remoteAddr), nil
}

// Address returns mirror address from Git repository config without credentials.
func (m *Mirror) Address() string {
	m.readAddress()
	return HandleCloneUserCredentials(m.address, false)
	return util.SanitizeURLCredentials(m.address, false)
}

// FullAddress returns mirror address from Git repository config.

@@ -145,7 +145,14 @@ func (m *Mirror) runSync() bool {
	if _, stderr, err := process.GetManager().ExecDir(
		timeout, repoPath, fmt.Sprintf("Mirror.runSync: %s", repoPath),
		"git", gitArgs...); err != nil {
		desc := fmt.Sprintf("Failed to update mirror repository '%s': %s", repoPath, stderr)
		// sanitize the output, since it may contain the remote address, which may
		// contain a password
		message, err := sanitizeOutput(stderr, repoPath)
		if err != nil {
			log.Error(4, "sanitizeOutput: %v", err)
			return false
		}
		desc := fmt.Sprintf("Failed to update mirror repository '%s': %s", repoPath, message)
		log.Error(4, desc)
		if err = CreateRepositoryNotice(desc); err != nil {
			log.Error(4, "CreateRepositoryNotice: %v", err)

@@ -170,7 +177,14 @@ func (m *Mirror) runSync() bool {
	if _, stderr, err := process.GetManager().ExecDir(
		timeout, wikiPath, fmt.Sprintf("Mirror.runSync: %s", wikiPath),
		"git", "remote", "update", "--prune"); err != nil {
		desc := fmt.Sprintf("Failed to update mirror wiki repository '%s': %s", wikiPath, stderr)
		// sanitize the output, since it may contain the remote address, which may
		// contain a password
		message, err := sanitizeOutput(stderr, wikiPath)
		if err != nil {
			log.Error(4, "sanitizeOutput: %v", err)
			return false
		}
		desc := fmt.Sprintf("Failed to update mirror wiki repository '%s': %s", wikiPath, message)
		log.Error(4, desc)
		if err = CreateRepositoryNotice(desc); err != nil {
			log.Error(4, "CreateRepositoryNotice: %v", err)
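The mirror-sync change above stops logging raw `git` stderr, which can embed a `https://user:password@host` remote, and instead looks the remote URL up in the repository's git config and blanks it out of the message before logging. A self-contained sketch of that idea; the exact behaviour of `util.SanitizeMessage` is assumed, so the placeholder text and paths below are illustrative:

```go
package main

import (
	"fmt"
	"strings"

	ini "gopkg.in/ini.v1"
)

// remoteAddress reads remote.origin.url from <repoPath>/config, like the
// helper added in the hunk above.
func remoteAddress(repoPath string) (string, error) {
	cfg, err := ini.Load(repoPath + "/config")
	if err != nil {
		return "", err
	}
	return cfg.Section(`remote "origin"`).Key("url").Value(), nil
}

// sanitizeOutput replaces any occurrence of the remote address in command
// output with a placeholder; the placeholder string is an assumption here.
func sanitizeOutput(output, repoPath string) (string, error) {
	remoteAddr, err := remoteAddress(repoPath)
	if err != nil {
		// without the remote address we cannot sanitize, so fail instead of
		// risking a credential leak in the log
		return "", err
	}
	return strings.Replace(output, remoteAddr, "<sanitized-remote>", -1), nil
}

func main() {
	msg, err := sanitizeOutput(
		"fatal: unable to access 'https://user:secret@example.com/repo.git'",
		"/data/gitea-repositories/user/repo.git")
	fmt.Println(msg, err)
}
```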
@@ -153,3 +153,26 @@ func TestRepoLocalCopyPath(t *testing.T) {
	setting.Repository.Local.LocalCopyPath = tempPath
	assert.Equal(t, expected, repo.LocalCopyPath())
}

func TestTransferOwnership(t *testing.T) {
	assert.NoError(t, PrepareTestDatabase())

	doer := AssertExistsAndLoadBean(t, &User{ID: 2}).(*User)
	repo := AssertExistsAndLoadBean(t, &Repository{ID: 3}).(*Repository)
	repo.Owner = AssertExistsAndLoadBean(t, &User{ID: repo.OwnerID}).(*User)
	assert.NoError(t, TransferOwnership(doer, "user2", repo))

	transferredRepo := AssertExistsAndLoadBean(t, &Repository{ID: 3}).(*Repository)
	assert.EqualValues(t, 2, transferredRepo.OwnerID)

	assert.False(t, com.IsExist(RepoPath("user3", "repo3")))
	assert.True(t, com.IsExist(RepoPath("user2", "repo3")))
	AssertExistsAndLoadBean(t, &Action{
		OpType: ActionTransferRepo,
		ActUserID: 2,
		RepoID: 3,
		Content: "user3/repo3",
	})

	CheckConsistencyFor(t, &Repository{}, &User{}, &Team{})
}
@@ -315,10 +315,9 @@ func (u *User) generateRandomAvatar(e Engine) error {
	return nil
}

// RelAvatarLink returns relative avatar link to the site domain,
// which includes app sub-url as prefix. However, it is possible
// to return full URL if user enables Gravatar-like service.
func (u *User) RelAvatarLink() string {
// SizedRelAvatarLink returns a relative link to the user's avatar. When
// applicable, the link is for an avatar of the indicated size (in pixels).
func (u *User) SizedRelAvatarLink(size int) string {
	if u.ID == -1 {
		return base.DefaultAvatarLink()
	}

@@ -338,7 +337,14 @@ func (u *User) RelAvatarLink() string {

		return setting.AppSubURL + "/avatars/" + u.Avatar
	}
	return base.AvatarLink(u.AvatarEmail)
	return base.SizedAvatarLink(u.AvatarEmail, size)
}

// RelAvatarLink returns a relative link to the user's avatar. The link
// may either be a sub-URL to this site, or a full URL to an external avatar
// service.
func (u *User) RelAvatarLink() string {
	return u.SizedRelAvatarLink(base.DefaultAvatarSize)
}

// AvatarLink returns user avatar absolute link.
@@ -16,6 +16,8 @@ import (
	"math"
	"math/big"
	"net/http"
	"net/url"
	"path"
	"strconv"
	"strings"
	"time"

@@ -197,24 +199,59 @@ func DefaultAvatarLink() string {
	return setting.AppSubURL + "/img/avatar_default.png"
}

// DefaultAvatarSize is a sentinel value for the default avatar size, as
// determined by the avatar-hosting service.
const DefaultAvatarSize = -1

// libravatarURL returns the URL for the given email. This function should only
// be called if a federated avatar service is enabled.
func libravatarURL(email string) (*url.URL, error) {
	urlStr, err := setting.LibravatarService.FromEmail(email)
	if err != nil {
		log.Error(4, "LibravatarService.FromEmail(email=%s): error %v", email, err)
		return nil, err
	}
	u, err := url.Parse(urlStr)
	if err != nil {
		log.Error(4, "Failed to parse libravatar url(%s): error %v", urlStr, err)
		return nil, err
	}
	return u, nil
}

// SizedAvatarLink returns a sized link to the avatar for the given email
// address.
func SizedAvatarLink(email string, size int) string {
	var avatarURL *url.URL
	if setting.EnableFederatedAvatar && setting.LibravatarService != nil {
		var err error
		avatarURL, err = libravatarURL(email)
		if err != nil {
			return DefaultAvatarLink()
		}
	} else if !setting.DisableGravatar {
		// copy GravatarSourceURL, because we will modify its Path.
		copyOfGravatarSourceURL := *setting.GravatarSourceURL
		avatarURL = &copyOfGravatarSourceURL
		avatarURL.Path = path.Join(avatarURL.Path, HashEmail(email))
	} else {
		return DefaultAvatarLink()
	}

	vals := avatarURL.Query()
	vals.Set("d", "identicon")
	if size != DefaultAvatarSize {
		vals.Set("s", strconv.Itoa(size))
	}
	avatarURL.RawQuery = vals.Encode()
	return avatarURL.String()
}

// AvatarLink returns relative avatar link to the site domain by given email,
// which includes app sub-url as prefix. However, it is possible
// to return full URL if user enables Gravatar-like service.
func AvatarLink(email string) string {
	if setting.EnableFederatedAvatar && setting.LibravatarService != nil {
		url, err := setting.LibravatarService.FromEmail(email)
		if err != nil {
			log.Error(4, "LibravatarService.FromEmail(email=%s): error %v", email, err)
			return DefaultAvatarLink()
		}
		return url
	}

	if !setting.DisableGravatar {
		return setting.GravatarSource + HashEmail(email) + "?d=identicon"
	}

	return DefaultAvatarLink()
	return SizedAvatarLink(email, DefaultAvatarSize)
}

// Seconds-based time units
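SizedAvatarLink above builds the avatar URL by copying the configured Gravatar base URL, appending the email hash to its path, and setting the `d` and `s` query parameters. A standalone sketch of that URL construction with net/url; the base URL and the MD5-based hashing here are stand-ins for Gitea's configuration and HashEmail, not its actual settings:

```go
package main

import (
	"crypto/md5"
	"fmt"
	"net/url"
	"path"
	"strconv"
	"strings"
)

// hashEmail follows the usual Gravatar convention: md5 of the trimmed,
// lower-cased address, hex-encoded.
func hashEmail(email string) string {
	sum := md5.Sum([]byte(strings.ToLower(strings.TrimSpace(email))))
	return fmt.Sprintf("%x", sum)
}

// sizedAvatarLink mirrors the shape of SizedAvatarLink above for the plain
// Gravatar case: join the hash onto the base path, then set d= and s=.
func sizedAvatarLink(base *url.URL, email string, size int) string {
	u := *base // copy, because we modify Path and RawQuery
	u.Path = path.Join(u.Path, hashEmail(email))
	vals := u.Query()
	vals.Set("d", "identicon")
	if size > 0 {
		vals.Set("s", strconv.Itoa(size))
	}
	u.RawQuery = vals.Encode()
	return u.String()
}

func main() {
	base, _ := url.Parse("https://secure.gravatar.com/avatar/")
	fmt.Println(sizedAvatarLink(base, "gitea@example.com", 100))
	// prints the sized identicon URL checked by TestSizedAvatarLink below
}
```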
@@ -1,11 +1,13 @@
package base

import (
	"net/url"
	"os"
	"testing"
	"time"

	"code.gitea.io/gitea/modules/setting"

	"github.com/Unknwon/i18n"
	macaroni18n "github.com/go-macaron/i18n"
	"github.com/stretchr/testify/assert"

@@ -126,16 +128,40 @@ func TestHashEmail(t *testing.T) {
	)
}

func TestAvatarLink(t *testing.T) {
const gravatarSource = "https://secure.gravatar.com/avatar/"

func disableGravatar() {
	setting.EnableFederatedAvatar = false
	setting.LibravatarService = nil
	setting.DisableGravatar = true
}

	assert.Equal(t, "/img/avatar_default.png", AvatarLink(""))

func enableGravatar(t *testing.T) {
	setting.DisableGravatar = false
	var err error
	setting.GravatarSourceURL, err = url.Parse(gravatarSource)
	assert.NoError(t, err)
}

func TestSizedAvatarLink(t *testing.T) {
	disableGravatar()
	assert.Equal(t, "/img/avatar_default.png",
		SizedAvatarLink("gitea@example.com", 100))

	enableGravatar(t)
	assert.Equal(t,
		"353cbad9b58e69c96154ad99f92bedc7?d=identicon",
		"https://secure.gravatar.com/avatar/353cbad9b58e69c96154ad99f92bedc7?d=identicon&s=100",
		SizedAvatarLink("gitea@example.com", 100),
	)
}

func TestAvatarLink(t *testing.T) {
	disableGravatar()
	assert.Equal(t, "/img/avatar_default.png", AvatarLink("gitea@example.com"))

	enableGravatar(t)
	assert.Equal(t,
		"https://secure.gravatar.com/avatar/353cbad9b58e69c96154ad99f92bedc7?d=identicon",
		AvatarLink("gitea@example.com"),
	)
}
@@ -5,6 +5,9 @@
package cache

import (
"fmt"
"strconv"

"code.gitea.io/gitea/modules/setting"

mc "github.com/go-macaron/cache"
@@ -42,7 +45,18 @@ func GetInt(key string, getFunc func() (int, error)) (int, error) {
}
conn.Put(key, value, int64(setting.CacheService.TTL.Seconds()))
}
return conn.Get(key).(int), nil
switch value := conn.Get(key).(type) {
case int:
return value, nil
case string:
v, err := strconv.Atoi(value)
if err != nil {
return 0, err
}
return v, nil
default:
return 0, fmt.Errorf("Unsupported cached value type: %v", value)
}
}

// GetInt64 returns key value from cache with callback when no key exists in cache
@@ -60,7 +74,18 @@ func GetInt64(key string, getFunc func() (int64, error)) (int64, error) {
}
conn.Put(key, value, int64(setting.CacheService.TTL.Seconds()))
}
return conn.Get(key).(int64), nil
switch value := conn.Get(key).(type) {
case int64:
return value, nil
case string:
v, err := strconv.ParseInt(value, 10, 64)
if err != nil {
return 0, err
}
return v, nil
default:
return 0, fmt.Errorf("Unsupported cached value type: %v", value)
}
}

// Remove key from cache

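The change above replaces a hard type assertion with a type switch because some cache backends hand values back as strings rather than ints. A minimal, self-contained sketch of the same decoding pattern; `decodeCachedInt` is a hypothetical helper, not part of the Gitea codebase:

```go
package main

import (
	"fmt"
	"strconv"
)

// decodeCachedInt accepts either an int or its string form, mirroring the
// switch added to GetInt above.
func decodeCachedInt(raw interface{}) (int, error) {
	switch value := raw.(type) {
	case int:
		return value, nil
	case string:
		// e.g. a Redis-backed provider returning "42"
		return strconv.Atoi(value)
	default:
		return 0, fmt.Errorf("unsupported cached value type: %T", value)
	}
}

func main() {
	fmt.Println(decodeCachedInt(42))   // 42 <nil>
	fmt.Println(decodeCachedInt("42")) // 42 <nil>
}
```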
@@ -182,7 +182,7 @@ func Contexter() macaron.Handler {
branchName = repo.DefaultBranch
}
}
prefix := setting.AppURL + path.Join(ownerName, repoName, "src", branchName)
prefix := setting.AppURL + path.Join(ownerName, repoName, "src", "branch", branchName)
c.PlainText(http.StatusOK, []byte(com.Expand(`
<html>
<head>

@@ -143,6 +143,9 @@ func (r *Repository) GetEditorconfig() (*editorconfig.Editorconfig, error) {
if err != nil {
return nil, err
}
if treeEntry.Blob().Size() >= setting.UI.MaxDisplayFileSize {
return nil, git.ErrNotExist{ID: "", RelPath: ".editorconfig"}
}
reader, err := treeEntry.Blob().Data()
if err != nil {
return nil, err
@@ -453,7 +456,7 @@ func RepoAssignment() macaron.Handler {

if ctx.Query("go-get") == "1" {
ctx.Data["GoGetImport"] = ComposeGoGetImport(owner.Name, repo.Name)
prefix := setting.AppURL + path.Join(owner.Name, repo.Name, "src", ctx.Repo.BranchName)
prefix := setting.AppURL + path.Join(owner.Name, repo.Name, "src", "branch", ctx.Repo.BranchName)
ctx.Data["GoDocDirectory"] = prefix + "{/dir}"
ctx.Data["GoDocFile"] = prefix + "{/dir}/{file}#L{line}"
}
@@ -615,7 +618,11 @@ func RepoRefByType(refType RepoRefType) macaron.Handler {

if refType == RepoRefLegacy {
// redirect from old URL scheme to new URL scheme
ctx.Redirect(path.Join(setting.AppSubURL, strings.TrimSuffix(ctx.Req.URL.String(), ctx.Params("*")), ctx.Repo.BranchNameSubURL()))
ctx.Redirect(path.Join(
setting.AppSubURL,
strings.TrimSuffix(ctx.Req.URL.String(), ctx.Params("*")),
ctx.Repo.BranchNameSubURL(),
ctx.Repo.TreePath))
return
}
}

@@ -208,7 +208,7 @@ func (s *sendmailSender) Send(from string, to []string, msg io.WriterTo) error {
var closeError error
var waitError error

args := []string{"-F", from, "-i"}
args := []string{"-f", from, "-i"}
args = append(args, setting.MailService.SendmailArgs...)
args = append(args, to...)
log.Trace("Sending with: %s %v", setting.MailService.SendmailPath, args)

@@ -325,6 +325,7 @@ var (
// Picture settings
AvatarUploadPath string
GravatarSource string
GravatarSourceURL *url.URL
DisableGravatar bool
EnableFederatedAvatar bool
LibravatarService *libravatar.Libravatar
@@ -1026,18 +1027,22 @@ func NewContext() {
if DisableGravatar {
EnableFederatedAvatar = false
}
if EnableFederatedAvatar || !DisableGravatar {
GravatarSourceURL, err = url.Parse(GravatarSource)
if err != nil {
log.Fatal(4, "Failed to parse Gravatar URL(%s): %v",
GravatarSource, err)
}
}

if EnableFederatedAvatar {
LibravatarService = libravatar.New()
parts := strings.Split(GravatarSource, "/")
if len(parts) >= 3 {
if parts[0] == "https:" {
LibravatarService.SetUseHTTPS(true)
LibravatarService.SetSecureFallbackHost(parts[2])
} else {
LibravatarService.SetUseHTTPS(false)
LibravatarService.SetFallbackHost(parts[2])
}
if GravatarSourceURL.Scheme == "https" {
LibravatarService.SetUseHTTPS(true)
LibravatarService.SetSecureFallbackHost(GravatarSourceURL.Host)
} else {
LibravatarService.SetUseHTTPS(false)
LibravatarService.SetFallbackHost(GravatarSourceURL.Host)
}
}

@@ -1166,7 +1171,7 @@ func newService() {
Service.NoReplyAddress = sec.Key("NO_REPLY_ADDRESS").MustString("noreply.example.org")

sec = Cfg.Section("openid")
Service.EnableOpenIDSignIn = sec.Key("ENABLE_OPENID_SIGNIN").MustBool(false)
Service.EnableOpenIDSignIn = sec.Key("ENABLE_OPENID_SIGNIN").MustBool(!InstallLock)
Service.EnableOpenIDSignUp = sec.Key("ENABLE_OPENID_SIGNUP").MustBool(!Service.DisableRegistration && Service.EnableOpenIDSignIn)
pats := sec.Key("WHITELISTED_URIS").Strings(" ")
if len(pats) != 0 {
@@ -1392,7 +1397,7 @@ func newSessionService() {
SessionConfig.Provider = Cfg.Section("session").Key("PROVIDER").In("memory",
[]string{"memory", "file", "redis", "mysql"})
SessionConfig.ProviderConfig = strings.Trim(Cfg.Section("session").Key("PROVIDER_CONFIG").MustString(path.Join(AppDataPath, "sessions")), "\" ")
if !filepath.IsAbs(SessionConfig.ProviderConfig) {
if SessionConfig.Provider == "file" && !filepath.IsAbs(SessionConfig.ProviderConfig) {
SessionConfig.ProviderConfig = path.Join(AppWorkPath, SessionConfig.ProviderConfig)
}
SessionConfig.CookieName = Cfg.Section("session").Key("COOKIE_NAME").MustString("i_like_gitea")

@@ -9,6 +9,7 @@ import (
"container/list"
"encoding/json"
"fmt"
"html"
"html/template"
"mime"
"path/filepath"
@@ -16,16 +17,15 @@ import (
"strings"
"time"

"github.com/microcosm-cc/bluemonday"
"golang.org/x/net/html/charset"
"golang.org/x/text/transform"
"gopkg.in/editorconfig/editorconfig-core-go.v1"

"code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/base"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/markup"
"code.gitea.io/gitea/modules/setting"

"golang.org/x/net/html/charset"
"golang.org/x/text/transform"
"gopkg.in/editorconfig/editorconfig-core-go.v1"
)

// NewFuncMap returns functions for injecting to templates
@@ -67,7 +67,6 @@ func NewFuncMap() []template.FuncMap {
"AvatarLink": base.AvatarLink,
"Safe": Safe,
"SafeJS": SafeJS,
"Sanitize": bluemonday.UGCPolicy().Sanitize,
"Str2html": Str2html,
"TimeSince": base.TimeSince,
"RawTimeSince": base.RawTimeSince,
@@ -164,6 +163,7 @@ func NewFuncMap() []template.FuncMap {
"UnescapeLocale": func(str string) string {
return strings.NewReplacer("\\;", ";", "\\#", "#").Replace(str)
},
"Escape": Escape,
}}
}

@@ -182,6 +182,11 @@ func Str2html(raw string) template.HTML {
return template.HTML(markup.Sanitize(raw))
}

// Escape escapes a HTML string
func Escape(raw string) string {
return html.EscapeString(raw)
}

// List traversings the list
func List(l *list.List) chan interface{} {
e := l.Front()

@@ -11,6 +11,7 @@ import (

"code.gitea.io/gitea/modules/context"

"github.com/go-macaron/session"
"github.com/stretchr/testify/assert"
macaron "gopkg.in/macaron.v1"
)
@@ -33,6 +34,9 @@ func MockContext(t *testing.T) *context.Context {
macaronContext.Render = &mockRender{ResponseWriter: macaronContext.Resp}
return &context.Context{
Context: macaronContext,
Flash: &session.Flash{
Values: make(url.Values),
},
}
}

@@ -0,0 +1,48 @@
// Copyright 2017 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.

package util

import (
"net/url"
"strings"
)

// urlSafeError wraps an error whose message may contain a sensitive URL
type urlSafeError struct {
err error
unsanitizedURL string
}

func (err urlSafeError) Error() string {
return SanitizeMessage(err.err.Error(), err.unsanitizedURL)
}

// URLSanitizedError returns the sanitized version an error whose message may
// contain a sensitive URL
func URLSanitizedError(err error, unsanitizedURL string) error {
return urlSafeError{err: err, unsanitizedURL: unsanitizedURL}
}

// SanitizeMessage sanitizes a message which may contains a sensitive URL
func SanitizeMessage(message, unsanitizedURL string) string {
sanitizedURL := SanitizeURLCredentials(unsanitizedURL, true)
return strings.Replace(message, unsanitizedURL, sanitizedURL, -1)
}

// SanitizeURLCredentials sanitizes a url, either removing user credentials
// or replacing them with a placeholder.
func SanitizeURLCredentials(unsanitizedURL string, usePlaceholder bool) string {
u, err := url.Parse(unsanitizedURL)
if err != nil {
// don't log the error, since it might contain unsanitized URL.
return "(unparsable url)"
}
if u.User != nil && usePlaceholder {
u.User = url.User("<credentials>")
} else {
u.User = nil
}
return u.String()
}

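A small usage sketch for the sanitize helpers added in the new file above, assuming the `modules/util` import path from this tree; the remote URL and error text are made up:

```go
package main

import (
	"fmt"

	"code.gitea.io/gitea/modules/util"
)

func main() {
	remote := "https://user:secret@git.example.com/org/repo.git"

	// Replace the credentials with a placeholder, or strip them entirely.
	fmt.Println(util.SanitizeURLCredentials(remote, true))
	fmt.Println(util.SanitizeURLCredentials(remote, false))

	// Wrap a clone error so its message no longer echoes the secret.
	err := fmt.Errorf("clone failed for %s: authentication failed", remote)
	fmt.Println(util.URLSanitizedError(err, remote))
}
```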
@@ -627,8 +627,8 @@ issues.label_templates.info=Il n'y a pas encore d'étiquettes. Vous pouvez cliqu
issues.label_templates.helper=Sélectionnez un ensemble d'étiquettes
issues.label_templates.use=Utiliser ce jeu d'étiquettes
issues.label_templates.fail_to_load_file=Impossible de charger le fichier de modèle étiquette '%s' : %v
issues.add_label_at=« enlevé la <div class="ui label" style="color: %s\; background-color: %s">%s</div> étiquette %s »
issues.remove_label_at=« enlevé la <div class="ui label" style="color: %s\; background-color: %s">%s</div> étiquette %s »
issues.add_label_at=`a ajouté l'étiquette <div class="ui label" style="color: %s\; background-color: %s">%s</div> %s`
issues.remove_label_at=`a supprimé l'étiquette <div class="ui label" style="color: %s\; background-color: %s">%s</div> %s`
issues.add_milestone_at=`a ajouté cela au jalon <b>%s</b> %s`
issues.change_milestone_at=`a modifié le jalon de <b>%s</b> à <b>%s</b> %s`
issues.remove_milestone_at=`a supprimé cela du jalon <b>%s</b> %s`

@@ -9,8 +9,6 @@ import (
"net/http"
"strings"

api "code.gitea.io/sdk/gitea"

"code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/auth"
"code.gitea.io/gitea/modules/context"
@@ -18,6 +16,7 @@ import (
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/util"
"code.gitea.io/gitea/routers/api/v1/convert"
api "code.gitea.io/sdk/gitea"
)

// Search repositories via options
@@ -322,12 +321,13 @@ func Migrate(ctx *context.APIContext, form auth.MigrateRepoForm) {
RemoteAddr: remoteAddr,
})
if err != nil {
err = util.URLSanitizedError(err, remoteAddr)
if repo != nil {
if errDelete := models.DeleteRepository(ctx.User, ctxUser.ID, repo.ID); errDelete != nil {
log.Error(4, "DeleteRepository: %v", errDelete)
}
}
ctx.Error(500, "MigrateRepository", models.HandleCloneUserCredentials(err.Error(), true))
ctx.Error(500, "MigrateRepository", err)
return
}

@@ -108,8 +108,8 @@ func Install(ctx *context.Context) {
form.OfflineMode = setting.OfflineMode
form.DisableGravatar = setting.DisableGravatar
form.EnableFederatedAvatar = setting.EnableFederatedAvatar
form.EnableOpenIDSignIn = true
form.EnableOpenIDSignUp = true
form.EnableOpenIDSignIn = setting.Service.EnableOpenIDSignIn
form.EnableOpenIDSignUp = setting.Service.EnableOpenIDSignUp
form.DisableRegistration = setting.Service.DisableRegistration
form.EnableCaptcha = setting.Service.EnableCaptcha
form.RequireSignInView = setting.Service.RequireSignInView

@@ -6,6 +6,7 @@ package org

import (
"path"
"strings"

"github.com/Unknwon/com"

@@ -14,6 +15,7 @@ import (
"code.gitea.io/gitea/modules/base"
"code.gitea.io/gitea/modules/context"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/routers/utils"
)

const (
@@ -75,7 +77,7 @@ func TeamsAction(ctx *context.Context) {
ctx.Error(404)
return
}
uname := ctx.Query("uname")
uname := utils.RemoveUsernameParameterSuffix(strings.ToLower(ctx.Query("uname")))
var u *models.User
u, err = models.GetUserByName(uname)
if err != nil {

@@ -250,11 +250,11 @@ func Diff(ctx *context.Context) {
ctx.Data["Diff"] = diff
ctx.Data["Parents"] = parents
ctx.Data["DiffNotAvailable"] = diff.NumFiles() == 0
ctx.Data["SourcePath"] = setting.AppSubURL + "/" + path.Join(userName, repoName, "src", commitID)
ctx.Data["SourcePath"] = setting.AppSubURL + "/" + path.Join(userName, repoName, "src", "commit", commitID)
if commit.ParentCount() > 0 {
ctx.Data["BeforeSourcePath"] = setting.AppSubURL + "/" + path.Join(userName, repoName, "src", parents[0])
ctx.Data["BeforeSourcePath"] = setting.AppSubURL + "/" + path.Join(userName, repoName, "src", "commit", parents[0])
}
ctx.Data["RawPath"] = setting.AppSubURL + "/" + path.Join(userName, repoName, "raw", commitID)
ctx.Data["RawPath"] = setting.AppSubURL + "/" + path.Join(userName, repoName, "raw", "commit", commitID)
ctx.HTML(200, tplDiff)
}

@@ -315,9 +315,9 @@ func CompareDiff(ctx *context.Context) {
ctx.Data["Commit"] = commit
ctx.Data["Diff"] = diff
ctx.Data["DiffNotAvailable"] = diff.NumFiles() == 0
ctx.Data["SourcePath"] = setting.AppSubURL + "/" + path.Join(userName, repoName, "src", afterCommitID)
ctx.Data["BeforeSourcePath"] = setting.AppSubURL + "/" + path.Join(userName, repoName, "src", beforeCommitID)
ctx.Data["RawPath"] = setting.AppSubURL + "/" + path.Join(userName, repoName, "raw", afterCommitID)
ctx.Data["SourcePath"] = setting.AppSubURL + "/" + path.Join(userName, repoName, "src", "commit", afterCommitID)
ctx.Data["BeforeSourcePath"] = setting.AppSubURL + "/" + path.Join(userName, repoName, "src", "commit", beforeCommitID)
ctx.Data["RawPath"] = setting.AppSubURL + "/" + path.Join(userName, repoName, "raw", "commit", afterCommitID)
ctx.Data["RequireHighlightJS"] = true
ctx.HTML(200, tplDiff)
}

@@ -45,10 +45,11 @@ func ServeData(ctx *context.Context, name string, reader io.Reader) error {

// ServeBlob download a git.Blob
func ServeBlob(ctx *context.Context, blob *git.Blob) error {
dataRc, err := blob.Data()
dataRc, err := blob.DataAsync()
if err != nil {
return err
}
defer dataRc.Close()

return ServeData(ctx, ctx.Repo.TreePath, dataRc)
}

@@ -73,11 +73,16 @@ func editFile(ctx *context.Context, isNewFile bool) {

// No way to edit a directory online.
if entry.IsDir() {
ctx.Handle(404, "", nil)
ctx.Handle(404, "entry.IsDir", nil)
return
}

blob := entry.Blob()
if blob.Size() >= setting.UI.MaxDisplayFileSize {
ctx.Handle(404, "blob.Size", err)
return
}

dataRc, err := blob.Data()
if err != nil {
ctx.Handle(404, "blob.Data", err)
@@ -93,7 +98,7 @@ func editFile(ctx *context.Context, isNewFile bool) {

// Only text file are editable online.
if !base.IsTextFile(buf) {
ctx.Handle(404, "", nil)
ctx.Handle(404, "base.IsTextFile", nil)
return
}

@@ -319,6 +319,9 @@ func getFileContentFromDefaultBranch(ctx *context.Context, filename string) (str
if err != nil {
return "", false
}
if entry.Blob().Size() >= setting.UI.MaxDisplayFileSize {
return "", false
}
r, err = entry.Blob().Data()
if err != nil {
return "", false

@@ -257,12 +257,24 @@ func PrepareMergedViewPullInfo(ctx *context.Context, issue *models.Issue) {
setMergeTarget(ctx, pull)
ctx.Data["HasMerged"] = true

ctx.Data["NumCommits"], err = ctx.Repo.GitRepo.CommitsCountBetween(pull.MergeBase, pull.MergedCommitID)
mergedCommit, err := ctx.Repo.GitRepo.GetCommit(pull.MergedCommitID)
if err != nil {
ctx.Handle(500, "GetCommit", err)
return
}
// the ID of the last commit in the PR (not including the merge commit)
endCommitID, err := mergedCommit.ParentID(mergedCommit.ParentCount() - 1)
if err != nil {
ctx.Handle(500, "ParentID", err)
return
}

ctx.Data["NumCommits"], err = ctx.Repo.GitRepo.CommitsCountBetween(pull.MergeBase, endCommitID.String())
if err != nil {
ctx.Handle(500, "Repo.GitRepo.CommitsCountBetween", err)
return
}
ctx.Data["NumFiles"], err = ctx.Repo.GitRepo.FilesCountBetween(pull.MergeBase, pull.MergedCommitID)
ctx.Data["NumFiles"], err = ctx.Repo.GitRepo.FilesCountBetween(pull.MergeBase, endCommitID.String())
if err != nil {
ctx.Handle(500, "Repo.GitRepo.FilesCountBetween", err)
return
@@ -338,19 +350,19 @@ func ViewPullCommits(ctx *context.Context) {
ctx.Data["Username"] = ctx.Repo.Owner.Name
ctx.Data["Reponame"] = ctx.Repo.Repository.Name

startCommit, err := ctx.Repo.GitRepo.GetCommit(pull.MergeBase)
mergedCommit, err := ctx.Repo.GitRepo.GetCommit(pull.MergedCommitID)
if err != nil {
ctx.Handle(500, "Repo.GitRepo.GetCommit", err)
return
}
endCommit, err := ctx.Repo.GitRepo.GetCommit(pull.MergedCommitID)
endCommitID, err := mergedCommit.ParentID(mergedCommit.ParentCount() - 1)
if err != nil {
ctx.Handle(500, "Repo.GitRepo.GetCommit", err)
ctx.Handle(500, "ParentID", err)
return
}
commits, err = ctx.Repo.GitRepo.CommitsBetween(endCommit, startCommit)
commits, err = ctx.Repo.GitRepo.CommitsBetweenIDs(endCommitID.String(), pull.MergeBase)
if err != nil {
ctx.Handle(500, "Repo.GitRepo.CommitsBetween", err)
ctx.Handle(500, "Repo.GitRepo.CommitsBetweenIDs", err)
return
}
} else {
@@ -402,7 +414,17 @@ func ViewPullFiles(ctx *context.Context) {

diffRepoPath = ctx.Repo.GitRepo.Path
startCommitID = pull.MergeBase
endCommitID = pull.MergedCommitID
mergedCommit, err := ctx.Repo.GitRepo.GetCommit(pull.MergedCommitID)
if err != nil {
ctx.Handle(500, "GetCommit", err)
return
}
endCommitSha, err := mergedCommit.ParentID(mergedCommit.ParentCount() - 1)
if err != nil {
ctx.Handle(500, "ParentID", err)
return
}
endCommitID = endCommitSha.String()
gitRepo = ctx.Repo.GitRepo

headTarget = path.Join(ctx.Repo.Owner.Name, ctx.Repo.Repository.Name)
@@ -458,9 +480,9 @@ func ViewPullFiles(ctx *context.Context) {
}

ctx.Data["IsImageFile"] = commit.IsImageFile
ctx.Data["SourcePath"] = setting.AppSubURL + "/" + path.Join(headTarget, "src", endCommitID)
ctx.Data["BeforeSourcePath"] = setting.AppSubURL + "/" + path.Join(headTarget, "src", startCommitID)
ctx.Data["RawPath"] = setting.AppSubURL + "/" + path.Join(headTarget, "raw", endCommitID)
ctx.Data["SourcePath"] = setting.AppSubURL + "/" + path.Join(headTarget, "src", "commit", endCommitID)
ctx.Data["BeforeSourcePath"] = setting.AppSubURL + "/" + path.Join(headTarget, "src", "commit", startCommitID)
ctx.Data["RawPath"] = setting.AppSubURL + "/" + path.Join(headTarget, "raw", "commit", endCommitID)
ctx.Data["RequireHighlightJS"] = true

ctx.HTML(200, tplPullFiles)
@@ -667,9 +689,9 @@ func PrepareCompareDiff(
ctx.Data["IsImageFile"] = headCommit.IsImageFile

headTarget := path.Join(headUser.Name, repo.Name)
ctx.Data["SourcePath"] = setting.AppSubURL + "/" + path.Join(headTarget, "src", headCommitID)
ctx.Data["BeforeSourcePath"] = setting.AppSubURL + "/" + path.Join(headTarget, "src", prInfo.MergeBase)
ctx.Data["RawPath"] = setting.AppSubURL + "/" + path.Join(headTarget, "raw", headCommitID)
ctx.Data["SourcePath"] = setting.AppSubURL + "/" + path.Join(headTarget, "src", "commit", headCommitID)
ctx.Data["BeforeSourcePath"] = setting.AppSubURL + "/" + path.Join(headTarget, "src", "commit", prInfo.MergeBase)
ctx.Data["RawPath"] = setting.AppSubURL + "/" + path.Join(headTarget, "raw", "commit", headCommitID)
return false
}

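The pull-request hunks above change the range calculation so that, for a merged pull request, counting ends at the merge commit's last parent (the tip of the PR branch) rather than at the merge commit itself. A rough sketch of that idea, assuming the code.gitea.io/git API used in these hunks; the path, IDs and helper name are placeholders:

```go
package main

import (
	"fmt"

	"code.gitea.io/git"
)

// prCommitCount counts the commits a merged PR contributed, excluding the
// merge commit itself. mergeBase and mergedCommitID stand in for the values
// stored on the pull request.
func prCommitCount(repoPath, mergeBase, mergedCommitID string) (int64, error) {
	gitRepo, err := git.OpenRepository(repoPath)
	if err != nil {
		return 0, err
	}
	mergedCommit, err := gitRepo.GetCommit(mergedCommitID)
	if err != nil {
		return 0, err
	}
	// Last parent of the merge commit == last commit of the PR branch.
	endCommitID, err := mergedCommit.ParentID(mergedCommit.ParentCount() - 1)
	if err != nil {
		return 0, err
	}
	return gitRepo.CommitsCountBetween(mergeBase, endCommitID.String())
}

func main() {
	n, err := prCommitCount("/path/to/repo.git", "mergeBaseSHA", "mergeCommitSHA")
	fmt.Println(n, err)
}
```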
@@ -191,6 +191,7 @@ func NewReleasePost(ctx *context.Context, form auth.NewReleaseForm) {

rel.Title = form.Title
rel.Note = form.Content
rel.Target = form.Target
rel.IsDraft = len(form.Draft) > 0
rel.IsPrerelease = form.Prerelease
rel.PublisherID = ctx.User.ID

@@ -20,6 +20,7 @@ import (
"code.gitea.io/gitea/modules/context"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/util"
)

const (
@@ -232,6 +233,9 @@ func MigratePost(ctx *context.Context, form auth.MigrateRepoForm) {
return
}

// remoteAddr may contain credentials, so we sanitize it
err = util.URLSanitizedError(err, remoteAddr)

if repo != nil {
if errDelete := models.DeleteRepository(ctx.User, ctxUser.ID, repo.ID); errDelete != nil {
log.Error(4, "DeleteRepository: %v", errDelete)
@@ -241,11 +245,11 @@ func MigratePost(ctx *context.Context, form auth.MigrateRepoForm) {
if strings.Contains(err.Error(), "Authentication failed") ||
strings.Contains(err.Error(), "could not read Username") {
ctx.Data["Err_Auth"] = true
ctx.RenderWithErr(ctx.Tr("form.auth_failed", models.HandleCloneUserCredentials(err.Error(), true)), tplMigrate, &form)
ctx.RenderWithErr(ctx.Tr("form.auth_failed", err.Error()), tplMigrate, &form)
return
} else if strings.Contains(err.Error(), "fatal:") {
ctx.Data["Err_CloneAddr"] = true
ctx.RenderWithErr(ctx.Tr("repo.migrate.failed", models.HandleCloneUserCredentials(err.Error(), true)), tplMigrate, &form)
ctx.RenderWithErr(ctx.Tr("repo.migrate.failed", err.Error()), tplMigrate, &form)
return
}

@@ -38,7 +38,7 @@ func Search(ctx *context.Context) {
pager := paginater.New(total, setting.UI.RepoSearchPagingNum, page, 5)
ctx.Data["Page"] = pager
ctx.Data["SourcePath"] = setting.AppSubURL + "/" +
path.Join(ctx.Repo.Repository.Owner.Name, ctx.Repo.Repository.Name, "src", ctx.Repo.Repository.DefaultBranch)
path.Join(ctx.Repo.Repository.Owner.Name, ctx.Repo.Repository.Name, "src", "branch", ctx.Repo.Repository.DefaultBranch)
ctx.Data["SearchResults"] = searchResults
ctx.Data["RequireHighlightJS"] = true
ctx.Data["PageIsViewCode"] = true

@@ -16,6 +16,7 @@ import (
"code.gitea.io/gitea/modules/context"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/routers/utils"
)

const (
@@ -366,7 +367,7 @@ func Collaboration(ctx *context.Context) {

// CollaborationPost response for actions for a collaboration of a repository
func CollaborationPost(ctx *context.Context) {
name := strings.ToLower(ctx.Query("collaborator"))
name := utils.RemoveUsernameParameterSuffix(strings.ToLower(ctx.Query("collaborator")))
if len(name) == 0 || ctx.Repo.Owner.LowerName == name {
ctx.Redirect(setting.AppSubURL + ctx.Req.URL.Path)
return

@@ -76,11 +76,12 @@ func renderDirectory(ctx *context.Context, treeLink string) {
ctx.Data["ReadmeInList"] = true
ctx.Data["ReadmeExist"] = true

dataRc, err := readmeFile.Data()
dataRc, err := readmeFile.DataAsync()
if err != nil {
ctx.Handle(500, "Data", err)
return
}
defer dataRc.Close()

buf := make([]byte, 1024)
n, _ := dataRc.Read(buf)
@@ -91,14 +92,21 @@ func renderDirectory(ctx *context.Context, treeLink string) {
ctx.Data["FileName"] = readmeFile.Name()
// FIXME: what happens when README file is an image?
if isTextFile {
d, _ := ioutil.ReadAll(dataRc)
buf = append(buf, d...)
if markup.Type(readmeFile.Name()) != "" {
ctx.Data["IsMarkup"] = true
ctx.Data["FileContent"] = string(markup.Render(readmeFile.Name(), buf, treeLink, ctx.Repo.Repository.ComposeMetas()))
if readmeFile.Size() >= setting.UI.MaxDisplayFileSize {
// Pretend that this is a normal text file to display 'This file is too large to be shown'
ctx.Data["IsFileTooLarge"] = true
ctx.Data["IsTextFile"] = true
ctx.Data["FileSize"] = readmeFile.Size()
} else {
ctx.Data["IsRenderedHTML"] = true
ctx.Data["FileContent"] = string(bytes.Replace(buf, []byte("\n"), []byte(`<br>`), -1))
d, _ := ioutil.ReadAll(dataRc)
buf = append(buf, d...)
if markup.Type(readmeFile.Name()) != "" {
ctx.Data["IsMarkup"] = true
ctx.Data["FileContent"] = string(markup.Render(readmeFile.Name(), buf, treeLink, ctx.Repo.Repository.ComposeMetas()))
} else {
ctx.Data["IsRenderedHTML"] = true
ctx.Data["FileContent"] = string(bytes.Replace(buf, []byte("\n"), []byte(`<br>`), -1))
}
}
}
}
@@ -135,11 +143,12 @@ func renderFile(ctx *context.Context, entry *git.TreeEntry, treeLink, rawLink st
ctx.Data["IsViewFile"] = true

blob := entry.Blob()
dataRc, err := blob.Data()
dataRc, err := blob.DataAsync()
if err != nil {
ctx.Handle(500, "Data", err)
ctx.Handle(500, "DataAsync", err)
return
}
defer dataRc.Close()

ctx.Data["FileSize"] = blob.Size()
ctx.Data["FileName"] = blob.Name()

@@ -223,7 +223,9 @@ func SettingsSecurityPost(ctx *context.Context, form auth.ChangePasswordForm) {
return
}

if ctx.User.IsPasswordSet() && !ctx.User.ValidatePassword(form.OldPassword) {
if len(form.Password) < setting.MinPasswordLength {
ctx.Flash.Error(ctx.Tr("auth.password_too_short", setting.MinPasswordLength))
} else if ctx.User.IsPasswordSet() && !ctx.User.ValidatePassword(form.OldPassword) {
ctx.Flash.Error(ctx.Tr("settings.password_incorrect"))
} else if form.Password != form.Retype {
ctx.Flash.Error(ctx.Tr("form.password_not_match"))

@@ -0,0 +1,56 @@
// Copyright 2017 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.

package user

/*func TestChangePassword(t *testing.T) {
oldPassword := "password"
setting.MinPasswordLength = 6

for _, req := range []struct {
OldPassword string
NewPassword string
Retype string
Message string
}{
{
OldPassword: oldPassword,
NewPassword: "123456",
Retype: "123456",
Message: "",
},
{
OldPassword: oldPassword,
NewPassword: "12345",
Retype: "12345",
Message: "auth.password_too_short",
},
{
OldPassword: "12334",
NewPassword: "123456",
Retype: "123456",
Message: "settings.password_incorrect",
},
{
OldPassword: oldPassword,
NewPassword: "123456",
Retype: "12345",
Message: "form.password_not_match",
},
} {
models.PrepareTestEnv(t)
ctx := test.MockContext(t, "user/settings/security")
test.LoadUser(t, ctx, 2)
test.LoadRepo(t, ctx, 1)

SettingsSecurityPost(ctx, auth.ChangePasswordForm{
OldPassword: req.OldPassword,
Password: req.NewPassword,
Retype: req.Retype,
})

assert.EqualValues(t, req.Message, ctx.Flash.ErrorMsg)
assert.EqualValues(t, http.StatusFound, ctx.Resp.Status())
}
}*/

@@ -0,0 +1,17 @@
// Copyright 2017 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.

package utils

import (
"strings"
)

// RemoveUsernameParameterSuffix returns the username parameter without the (fullname) suffix - leaving just the username
func RemoveUsernameParameterSuffix(name string) string {
if index := strings.Index(name, " ("); index >= 0 {
name = name[:index]
}
return name
}

@@ -0,0 +1,17 @@
// Copyright 2017 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.

package utils

import (
"testing"

"github.com/stretchr/testify/assert"
)

func TestRemoveUsernameParameterSuffix(t *testing.T) {
assert.Equal(t, "foobar", RemoveUsernameParameterSuffix("foobar (Foo Bar)"))
assert.Equal(t, "foobar", RemoveUsernameParameterSuffix("foobar"))
assert.Equal(t, "", RemoveUsernameParameterSuffix(""))
}

@@ -99,7 +99,7 @@
<dt>{{.i18n.Tr "admin.config.db_user"}}</dt>
<dd>{{if .DbCfg.User}}{{.DbCfg.User}}{{else}}-{{end}}</dd>
<dt>{{.i18n.Tr "admin.config.db_ssl_mode"}}</dt>
<dd><i class="fa fa{{if .DbCfg.SSLMode}}-check{{end}}-square-o"></i> {{.i18n.Tr "admin.config.db_ssl_mode_helper"}}</dd>
<dd>{{if .DbCfg.SSLMode}}{{.DbCfg.SSLMode}}{{else}}-{{end}} {{.i18n.Tr "admin.config.db_ssl_mode_helper"}}</dd>
<dt>{{.i18n.Tr "admin.config.db_path"}}</dt>
<dd>{{if .DbCfg.Path}}{{.DbCfg.Path}}{{else}}-{{end}} {{.i18n.Tr "admin.config.db_path_helper"}}</dd>
</dl>
@@ -264,8 +264,6 @@
<dd>{{.Git.MaxGitDiffFiles}}</dd>
<dt>{{.i18n.Tr "admin.config.git_gc_args"}}</dt>
<dd><code>{{.Git.GCArgs}}</code></dd>
<dt>{{.i18n.Tr "admin.config.git_max_diff_lines"}}</dt>
<dd>{{.Git.MaxGitDiffLines}}</dd>
<div class="ui divider"></div>
<dt>{{.i18n.Tr "admin.config.git_migrate_timeout"}}</dt>
<dd>{{.Git.Timeout.Migrate}} {{.i18n.Tr "tool.raw_seconds"}}</dd>

@@ -2,7 +2,7 @@
{{range .Repos}}
<div class="item">
<div class="ui header">
<a class="name" href="{{AppSubUrl}}/{{if .Owner}}{{.Owner.Name}}{{else if $.Org}}{{$.Org.Name}}{{else}}{{$.Owner.Name}}{{end}}/{{.Name}}">{{if or $.PageIsExplore $.PageIsProfileStarList }}{{if .Owner}}{{.Owner.Name}} / {{end}}{{end}}{{.Name}}</a>
<a class="name" href="{{.HTMLURL}}">{{if or $.PageIsExplore $.PageIsProfileStarList }}{{if .Owner}}{{.Owner.Name}} / {{end}}{{end}}{{.Name}}</a>
{{if .IsPrivate}}
<span class="text gold"><i class="octicon octicon-lock"></i></span>
{{else if .IsFork}}

@@ -3,7 +3,7 @@
<div class="ui vertically grid head">
<div class="column">
<div class="ui header">
<img class="ui image" src="{{.RelAvatarLink}}?s=100">
<img class="ui image" src="{{.SizedRelAvatarLink 100}}">
<span class="text thin grey"><a href="{{.HomeLink}}">{{.DisplayName}}</a></span>

<div class="ui right">

@@ -3,7 +3,7 @@
<div class="ui container">
<div class="ui grid">
<div class="ui sixteen wide column">
<img class="ui left" id="org-avatar" src="{{.Org.RelAvatarLink}}?s=140"/>
<img class="ui left" id="org-avatar" src="{{.Org.SizedRelAvatarLink 140}}"/>
<div id="org-info">
<div class="ui header">
{{.Org.DisplayName}}

@@ -8,7 +8,7 @@
{{range .Members}}
<div class="item ui grid">
<div class="ui one wide column">
<img class="ui avatar" src="{{.RelAvatarLink}}?s=48">
<img class="ui avatar" src="{{.SizedRelAvatarLink 48}}">
</div>
<div class="ui three wide column">
<div class="meta"><a href="{{.HomeLink}}">{{.Name}}</a></div>

@@ -62,7 +62,7 @@
{{end}}

{{if .Repository.UnitEnabled $.UnitTypeExternalTracker}}
<a class="{{if .PageIsIssueList}}active{{end}} item" href="{{.RepoLink}}/issues">
<a class="{{if .PageIsIssueList}}active{{end}} item" href="{{.RepoLink}}/issues" target="_blank">
<i class="octicon octicon-issue-opened"></i> {{.i18n.Tr "repo.issues"}} </span>
</a>
{{end}}

@@ -35,7 +35,7 @@
<div class="menu">
<a class="item" href="{{$.Link}}?q={{$.Keyword}}&type={{$.ViewType}}&sort={{$.SortType}}&state={{$.State}}&milestone={{$.MilestoneID}}&assignee={{$.AssigneeID}}">{{.i18n.Tr "repo.issues.filter_label_no_select"}}</a>
{{range .Labels}}
<a class="item" href="{{$.Link}}?q={{$.Keyword}}&type={{$.ViewType}}&sort={{$.SortType}}&state={{$.State}}&labels={{.ID}}&milestone={{$.MilestoneID}}&assignee={{$.AssigneeID}}"><span class="octicon {{if eq $.SelectLabels .ID}}octicon-check{{end}}"></span><span class="label color" style="background-color: {{.Color}}"></span> {{.Name | Sanitize}}</a>
<a class="item" href="{{$.Link}}?q={{$.Keyword}}&type={{$.ViewType}}&sort={{$.SortType}}&state={{$.State}}&labels={{.ID}}&milestone={{$.MilestoneID}}&assignee={{$.AssigneeID}}"><span class="octicon {{if eq $.SelectLabels .ID}}octicon-check{{end}}"></span><span class="label color" style="background-color: {{.Color}}"></span> {{.Name}}</a>
{{end}}
</div>
</div>
@@ -49,7 +49,7 @@
<div class="menu">
<a class="item" href="{{$.Link}}?q={{$.Keyword}}&type={{$.ViewType}}&sort={{$.SortType}}&state={{$.State}}&labels={{.SelectLabels}}&assignee={{$.AssigneeID}}">{{.i18n.Tr "repo.issues.filter_milestone_no_select"}}</a>
{{range .Milestones}}
<a class="{{if eq $.MilestoneID .ID}}active selected{{end}} item" href="{{$.Link}}?type={{$.ViewType}}&sort={{$.SortType}}&state={{$.State}}&labels={{$.SelectLabels}}&milestone={{.ID}}&assignee={{$.AssigneeID}}">{{.Name | Sanitize}}</a>
<a class="{{if eq $.MilestoneID .ID}}active selected{{end}} item" href="{{$.Link}}?type={{$.ViewType}}&sort={{$.SortType}}&state={{$.State}}&labels={{$.SelectLabels}}&milestone={{.ID}}&assignee={{$.AssigneeID}}">{{.Name}}</a>
{{end}}
</div>
</div>
@@ -117,7 +117,7 @@
<div class="menu">
{{range .Labels}}
<div class="item issue-action" data-action="toggle" data-element-id="{{.ID}}" data-url="{{$.RepoLink}}/issues/labels">
<span class="octicon {{if eq $.SelectLabels .ID}}octicon-check{{end}}"></span><span class="label color" style="background-color: {{.Color}}"></span> {{.Name | Sanitize}}
<span class="octicon {{if eq $.SelectLabels .ID}}octicon-check{{end}}"></span><span class="label color" style="background-color: {{.Color}}"></span> {{.Name}}
</div>
{{end}}
</div>
@@ -135,7 +135,7 @@
</div>
{{range .Milestones}}
<div class="item issue-action" data-element-id="{{.ID}}" data-url="{{$.RepoLink}}/issues/milestone">
{{.Name | Sanitize}}
{{.Name}}
</div>
{{end}}
</div>
@@ -172,10 +172,10 @@
<a class="title has-emoji" href="{{$.Link}}/{{.Index}}">{{.Title}}</a>

{{if .Ref}}
<a class="ui label" href="{{$.RepoLink}}/src/commit/{{.Ref}}">{{.Ref}}</a>
<a class="ui label" href="{{$.RepoLink}}/src/branch/{{.Ref}}">{{.Ref}}</a>
{{end}}
{{range .Labels}}
<a class="ui label" href="{{$.Link}}?q={{$.Keyword}}&type={{$.ViewType}}&state={{$.State}}&labels={{.ID}}&milestone={{$.MilestoneID}}&assignee={{$.AssigneeID}}" style="color: {{.ForegroundColor}}; background-color: {{.Color}}">{{.Name | Sanitize}}</a>
<a class="ui label" href="{{$.Link}}?q={{$.Keyword}}&type={{$.ViewType}}&state={{$.State}}&labels={{.ID}}&milestone={{$.MilestoneID}}&assignee={{$.AssigneeID}}" style="color: {{.ForegroundColor}}; background-color: {{.Color}}">{{.Name}}</a>
{{end}}

{{if .NumComments}}
@@ -186,7 +186,7 @@
{{$.i18n.Tr "repo.issues.opened_by" $timeStr .Poster.HomeLink .Poster.Name | Safe}}
{{if .Milestone}}
<a class="milestone" href="{{$.Link}}?q={{$.Keyword}}&type={{$.ViewType}}&state={{$.State}}&labels={{$.SelectLabels}}&milestone={{.Milestone.ID}}&assignee={{$.AssigneeID}}">
<span class="octicon octicon-milestone"></span> {{.Milestone.Name | Sanitize}}
<span class="octicon octicon-milestone"></span> {{.Milestone.Name}}
</a>
{{end}}
{{if .Assignee}}

@@ -43,7 +43,7 @@
<div class="milestone list">
{{range .Milestones}}
<li class="item">
<i class="octicon octicon-milestone"></i> <a href="{{$.RepoLink}}/issues?state={{$.State}}&milestone={{.ID}}">{{.Name | Sanitize}}</a>
<i class="octicon octicon-milestone"></i> <a href="{{$.RepoLink}}/issues?state={{$.State}}&milestone={{.ID}}">{{.Name}}</a>
<div class="ui right green progress" data-percent="{{.Completeness}}">
<div class="bar" {{if not .Completeness}}style="background-color: transparent"{{end}}>
<div class="progress"></div>

@@ -96,7 +96,7 @@
<img src="{{.Poster.RelAvatarLink}}">
</a>
<span class="text grey"><a href="{{.Poster.HomeLink}}">{{.Poster.Name}}</a>
{{if .Content}}{{$.i18n.Tr "repo.issues.add_label_at" .Label.ForegroundColor .Label.Color .Label.Name $createdStr | UnescapeLocale | Safe}}{{else}}{{$.i18n.Tr "repo.issues.remove_label_at" .Label.ForegroundColor .Label.Color .Label.Name $createdStr | UnescapeLocale | Safe}}{{end}}</span>
{{if .Content}}{{$.i18n.Tr "repo.issues.add_label_at" .Label.ForegroundColor .Label.Color (.Label.Name|Escape) $createdStr | UnescapeLocale | Safe}}{{else}}{{$.i18n.Tr "repo.issues.remove_label_at" .Label.ForegroundColor .Label.Color (.Label.Name|Escape) $createdStr | UnescapeLocale | Safe}}{{end}}</span>
</div>
{{end}}
{{else if eq .Type 8}}
@@ -106,7 +106,7 @@
<img src="{{.Poster.RelAvatarLink}}">
</a>
<span class="text grey"><a href="{{.Poster.HomeLink}}">{{.Poster.Name}}</a>
{{if gt .OldMilestoneID 0}}{{if gt .MilestoneID 0}}{{$.i18n.Tr "repo.issues.change_milestone_at" .OldMilestone.Name .Milestone.Name $createdStr | Safe}}{{else}}{{$.i18n.Tr "repo.issues.remove_milestone_at" .OldMilestone.Name $createdStr | Safe}}{{end}}{{else if gt .MilestoneID 0}}{{$.i18n.Tr "repo.issues.add_milestone_at" .Milestone.Name $createdStr | Safe}}{{end}}</span>
{{if gt .OldMilestoneID 0}}{{if gt .MilestoneID 0}}{{$.i18n.Tr "repo.issues.change_milestone_at" (.OldMilestone.Name|Escape) (.Milestone.Name|Escape) $createdStr | Safe}}{{else}}{{$.i18n.Tr "repo.issues.remove_milestone_at" (.OldMilestone.Name|Escape) $createdStr | Safe}}{{end}}{{else if gt .MilestoneID 0}}{{$.i18n.Tr "repo.issues.add_milestone_at" (.Milestone.Name|Escape) $createdStr | Safe}}{{end}}</span>
</div>
{{else if eq .Type 9}}
<div class="event">
@@ -124,23 +124,23 @@
{{else if eq .Type 10}}
<div class="event">
<span class="octicon octicon-primitive-dot"></span>
<a class="ui avatar image" href="{{.Poster.HomeLink}}">
<img src="{{.Poster.RelAvatarLink}}">
</a>
<span class="text grey"><a href="{{.Poster.HomeLink}}">{{.Poster.Name}}</a>
{{$.i18n.Tr "repo.issues.change_title_at" (.OldTitle|Escape) (.NewTitle|Escape) $createdStr | Safe}}
</span>
</div>
<a class="ui avatar image" href="{{.Poster.HomeLink}}">
<img src="{{.Poster.RelAvatarLink}}">
</a>
<span class="text grey"><a href="{{.Poster.HomeLink}}">{{.Poster.Name}}</a>
{{$.i18n.Tr "repo.issues.change_title_at" .OldTitle .NewTitle $createdStr | Safe}}
</span>
{{else if eq .Type 11}}
<div class="event">
<span class="octicon octicon-primitive-dot"></span>
<a class="ui avatar image" href="{{.Poster.HomeLink}}">
<img src="{{.Poster.RelAvatarLink}}">
</a>
<span class="text grey"><a href="{{.Poster.HomeLink}}">{{.Poster.Name}}</a>
{{$.i18n.Tr "repo.issues.delete_branch_at" .CommitSHA $createdStr | Safe}}
</span>
</div>
<a class="ui avatar image" href="{{.Poster.HomeLink}}">
<img src="{{.Poster.RelAvatarLink}}">
</a>
<span class="text grey"><a href="{{.Poster.HomeLink}}">{{.Poster.Name}}</a>
{{$.i18n.Tr "repo.issues.delete_branch_at" .CommitSHA $createdStr | Safe}}
</span>
{{else if eq .Type 12}}
<div class="event">
<span class="octicon octicon-primitive-dot"></span>

@@ -18,7 +18,7 @@
<span class="no-select item {{if .HasSelectedLabel}}hide{{end}}">{{.i18n.Tr "repo.issues.new.no_label"}}</span>
{{range .Labels}}
<div class="item">
<a class="ui label {{if not .IsChecked}}hide{{end}}" id="label_{{.ID}}" href="{{$.RepoLink}}/issues?labels={{.ID}}" style="color: {{.ForegroundColor}}; background-color: {{.Color}}">{{.Name | Sanitize}}</a>
<a class="ui label {{if not .IsChecked}}hide{{end}}" id="label_{{.ID}}" href="{{$.RepoLink}}/issues?labels={{.ID}}" style="color: {{.ForegroundColor}}; background-color: {{.Color}}">{{.Name}}</a>
</div>

{{end}}
@@ -40,7 +40,7 @@
{{.i18n.Tr "repo.issues.new.open_milestone"}}
</div>
{{range .OpenMilestones}}
<div class="item" data-id="{{.ID}}" data-href="{{$.RepoLink}}/issues?milestone={{.ID}}"> {{.Name | Sanitize}}</div>
<div class="item" data-id="{{.ID}}" data-href="{{$.RepoLink}}/issues?milestone={{.ID}}"> {{.Name}}</div>
{{end}}
{{end}}
{{if .ClosedMilestones}}
@@ -50,7 +50,7 @@
{{.i18n.Tr "repo.issues.new.closed_milestone"}}
</div>
{{range .ClosedMilestones}}
<a class="item" data-id="{{.ID}}" data-href="{{$.RepoLink}}/issues?milestone={{.ID}}"> {{.Name | Sanitize}}</a>
<a class="item" data-id="{{.ID}}" data-href="{{$.RepoLink}}/issues?milestone={{.ID}}"> {{.Name}}</a>
{{end}}
{{end}}
</div>
@@ -59,7 +59,7 @@
<span class="no-select item {{if .Issue.Milestone}}hide{{end}}">{{.i18n.Tr "repo.issues.new.no_milestone"}}</span>
<div class="selected">
{{if .Issue.Milestone}}
<a class="item" href="{{.RepoLink}}/issues?milestone={{.Issue.Milestone.ID}}"> {{.Issue.Milestone.Name | Sanitize}}</a>
<a class="item" href="{{.RepoLink}}/issues?milestone={{.Issue.Milestone.ID}}"> {{.Issue.Milestone.Name}}</a>
{{end}}
</div>
</div>

@@ -41,9 +41,11 @@
<a href="{{$.RepoLink}}/src/tag/{{.TagName}}" rel="nofollow"><i class="tag icon"></i> {{.TagName}}</a>
</h4>
<div class="download">
{{if $.Repository.UnitEnabled $.UnitTypeCode}}
<a href="{{$.RepoLink}}/src/commit/{{.Sha1}}" rel="nofollow"><i class="code icon"></i> {{ShortSha .Sha1}}</a>
<a href="{{$.RepoLink}}/archive/{{.TagName}}.zip" rel="nofollow"><i class="octicon octicon-file-zip"></i> ZIP</a>
<a href="{{$.RepoLink}}/archive/{{.TagName}}.tar.gz"><i class="octicon octicon-file-zip"></i> TAR.GZ</a>
{{end}}
</div>
{{else}}
<h3>
@@ -64,12 +66,14 @@
<div class="download">
<h2>{{$.i18n.Tr "repo.release.downloads"}}</h2>
<ul class="list">
{{if $.Repository.UnitEnabled $.UnitTypeCode}}
<li>
<a href="{{$.RepoLink}}/archive/{{.TagName}}.zip" rel="nofollow"><i class="octicon octicon-file-zip"></i> {{$.i18n.Tr "repo.release.source_code"}} (ZIP)</a>
</li>
<li>
<a href="{{$.RepoLink}}/archive/{{.TagName}}.tar.gz"><i class="octicon octicon-file-zip"></i> {{$.i18n.Tr "repo.release.source_code"}} (TAR.GZ)</a>
</li>
{{end}}
{{if .Attachments}}
{{range .Attachments}}
<li>

@@ -14,7 +14,7 @@
</div>
{{if .Keyword}}
<h3>
{{.i18n.Tr "repo.search.results" .Keyword .RepoLink .RepoName | Str2html}}
{{.i18n.Tr "repo.search.results" (.Keyword|Escape) .RepoLink .RepoName | Str2html }}
</h3>
<div class="repository search">
{{range $result := .SearchResults}}

@@ -1,7 +1,7 @@
{{template "base/head" .}}
<div class="repository wiki view">
{{template "repo/header" .}}
{{ $title := .title | Sanitize}}
{{ $title := .title}}
<div class="ui container">
<div class="ui grid">
<div class="ui ten wide column">
@@ -21,7 +21,7 @@
</div>
<div class="scrolling menu">
{{range .Pages}}
<div class="item {{if eq $.Title .Name}}selected{{end}}" data-url="{{$.RepoLink}}/wiki/{{.URL}}">{{.Name | Sanitize}}</div>
<div class="item {{if eq $.Title .Name}}selected{{end}}" data-url="{{$.RepoLink}}/wiki/{{.URL}}">{{.Name}}</div>
{{end}}
</div>
</div>
@@ -98,7 +98,7 @@
{{.i18n.Tr "repo.wiki.delete_page_button"}}
</div>
<div class="content">
<p>{{.i18n.Tr "repo.wiki.delete_page_notice_1" $title | Safe}}</p>
<p>{{.i18n.Tr "repo.wiki.delete_page_notice_1" ($title|Escape) | Safe}}</p>
</div>
{{template "base/delete_modal_actions" .}}
</div>

@@ -6,11 +6,11 @@
<div class="ui card">
{{if eq .SignedUserName .Owner.Name}}
<a class="image poping up" href="{{AppSubUrl}}/user/settings/avatar" id="profile-avatar" data-content="{{.i18n.Tr "user.change_avatar"}}" data-variation="inverted tiny" data-position="bottom center">
<img src="{{.Owner.RelAvatarLink}}?s=290" title="{{.Owner.Name}}"/>
<img src="{{.Owner.SizedRelAvatarLink 290}}" title="{{.Owner.Name}}"/>
</a>
{{else}}
<span class="image">
<img src="{{.Owner.RelAvatarLink}}?s=290" title="{{.Owner.Name}}"/>
<img src="{{.Owner.SizedRelAvatarLink 290}}" title="{{.Owner.Name}}"/>
</span>
{{end}}
<div class="content">

@@ -6,7 +6,11 @@ package git

import (
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
)

// Blob represents a Git object.
@@ -18,14 +22,52 @@ type Blob struct {
// Data gets content of blob all at once and wrap it as io.Reader.
// This can be very slow and memory consuming for huge content.
func (b *Blob) Data() (io.Reader, error) {
stdout, err := NewCommand("show", b.ID.String()).RunInDirBytes(b.repo.Path)
if err != nil {
return nil, err
stdout := new(bytes.Buffer)
stderr := new(bytes.Buffer)

// Preallocate memory to save ~50% memory usage on big files.
stdout.Grow(int(b.Size() + 2048))

if err := b.DataPipeline(stdout, stderr); err != nil {
return nil, concatenateError(err, stderr.String())
}
return bytes.NewBuffer(stdout), nil
return stdout, nil
}

// DataPipeline gets content of blob and write the result or error to stdout or stderr
func (b *Blob) DataPipeline(stdout, stderr io.Writer) error {
return NewCommand("show", b.ID.String()).RunInDirPipeline(b.repo.Path, stdout, stderr)
}

type cmdReadCloser struct {
cmd *exec.Cmd
stdout io.Reader
}

func (c cmdReadCloser) Read(p []byte) (int, error) {
return c.stdout.Read(p)
}

func (c cmdReadCloser) Close() error {
io.Copy(ioutil.Discard, c.stdout)
return c.cmd.Wait()
}

// DataAsync gets a ReadCloser for the contents of a blob without reading it all.
// Calling the Close function on the result will discard all unread output.
func (b *Blob) DataAsync() (io.ReadCloser, error) {
cmd := exec.Command("git", "show", b.ID.String())
cmd.Dir = b.repo.Path
cmd.Stderr = os.Stderr

stdout, err := cmd.StdoutPipe()
if err != nil {
return nil, fmt.Errorf("StdoutPipe: %v", err)
}

if err = cmd.Start(); err != nil {
return nil, fmt.Errorf("Start: %v", err)
}

return cmdReadCloser{stdout: stdout, cmd: cmd}, nil
}

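A brief sketch of how the streaming accessor added above might be used, assuming the code.gitea.io/git package from this tree; `copyBlob` is our own helper name, not part of the package:

```go
package blobutil

import (
	"io"

	"code.gitea.io/git"
)

// copyBlob streams a blob's content to w without buffering the whole file.
func copyBlob(w io.Writer, blob *git.Blob) error {
	dataRc, err := blob.DataAsync()
	if err != nil {
		return err
	}
	// Close discards any unread output and waits for the git process to exit.
	defer dataRc.Close()

	_, err = io.Copy(w, dataRc)
	return err
}
```

ServeBlob and renderFile in the hunks above are the real call sites for this pattern.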
@@ -98,10 +98,11 @@ func (c *Commit) IsImageFile(name string) bool {
return false
}

dataRc, err := blob.Data()
dataRc, err := blob.DataAsync()
if err != nil {
return false
}
defer dataRc.Close()
buf := make([]byte, 1024)
n, _ := dataRc.Read(buf)
buf = buf[:n]

@@ -25,7 +25,7 @@ var (
// Prefix the log prefix
Prefix = "[git-module] "
// GitVersionRequired is the minimum Git version required
GitVersionRequired = "1.8.1.6"
GitVersionRequired = "1.7.2"
)

func log(format string, args ...interface{}) {

@@ -0,0 +1,8 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// This code was translated into a form compatible with 6a from the public
// domain sources in SUPERCOP: https://bench.cr.yp.to/supercop.html

#define REDMASK51 0x0007FFFFFFFFFFFF

@@ -3,12 +3,12 @@
// license that can be found in the LICENSE file.

// This code was translated into a form compatible with 6a from the public
// domain sources in SUPERCOP: http://bench.cr.yp.to/supercop.html
// domain sources in SUPERCOP: https://bench.cr.yp.to/supercop.html

// +build amd64,!gccgo,!appengine

DATA ·REDMASK51(SB)/8, $0x0007FFFFFFFFFFFF
GLOBL ·REDMASK51(SB), 8, $8
// These constants cannot be encoded in non-MOVQ immediates.
// We access them directly from memory instead.

DATA ·_121666_213(SB)/8, $996687872
GLOBL ·_121666_213(SB), 8, $8

@@ -2,87 +2,64 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// This code was translated into a form compatible with 6a from the public
// domain sources in SUPERCOP: http://bench.cr.yp.to/supercop.html

// +build amd64,!gccgo,!appengine

// func cswap(inout *[5]uint64, v uint64)
// func cswap(inout *[4][5]uint64, v uint64)
TEXT ·cswap(SB),7,$0
MOVQ inout+0(FP),DI
MOVQ v+8(FP),SI

CMPQ SI,$1
MOVQ 0(DI),SI
MOVQ 80(DI),DX
MOVQ 8(DI),CX
MOVQ 88(DI),R8
MOVQ SI,R9
CMOVQEQ DX,SI
CMOVQEQ R9,DX
MOVQ CX,R9
CMOVQEQ R8,CX
CMOVQEQ R9,R8
MOVQ SI,0(DI)
MOVQ DX,80(DI)
MOVQ CX,8(DI)
MOVQ R8,88(DI)
MOVQ 16(DI),SI
MOVQ 96(DI),DX
MOVQ 24(DI),CX
MOVQ 104(DI),R8
MOVQ SI,R9
CMOVQEQ DX,SI
CMOVQEQ R9,DX
MOVQ CX,R9
CMOVQEQ R8,CX
CMOVQEQ R9,R8
MOVQ SI,16(DI)
MOVQ DX,96(DI)
MOVQ CX,24(DI)
MOVQ R8,104(DI)
MOVQ 32(DI),SI
MOVQ 112(DI),DX
MOVQ 40(DI),CX
MOVQ 120(DI),R8
MOVQ SI,R9
CMOVQEQ DX,SI
CMOVQEQ R9,DX
MOVQ CX,R9
CMOVQEQ R8,CX
CMOVQEQ R9,R8
MOVQ SI,32(DI)
MOVQ DX,112(DI)
MOVQ CX,40(DI)
MOVQ R8,120(DI)
MOVQ 48(DI),SI
MOVQ 128(DI),DX
MOVQ 56(DI),CX
MOVQ 136(DI),R8
MOVQ SI,R9
CMOVQEQ DX,SI
CMOVQEQ R9,DX
MOVQ CX,R9
CMOVQEQ R8,CX
CMOVQEQ R9,R8
MOVQ SI,48(DI)
MOVQ DX,128(DI)
MOVQ CX,56(DI)
MOVQ R8,136(DI)
MOVQ 64(DI),SI
MOVQ 144(DI),DX
MOVQ 72(DI),CX
MOVQ 152(DI),R8
MOVQ SI,R9
CMOVQEQ DX,SI
CMOVQEQ R9,DX
MOVQ CX,R9
CMOVQEQ R8,CX
CMOVQEQ R9,R8
MOVQ SI,64(DI)
MOVQ DX,144(DI)
MOVQ CX,72(DI)
MOVQ R8,152(DI)
MOVQ DI,AX
MOVQ SI,DX
SUBQ $1, SI
NOTQ SI
MOVQ SI, X15
PSHUFD $0x44, X15, X15

MOVOU 0(DI), X0
MOVOU 16(DI), X2
MOVOU 32(DI), X4
MOVOU 48(DI), X6
MOVOU 64(DI), X8
MOVOU 80(DI), X1
MOVOU 96(DI), X3
MOVOU 112(DI), X5
MOVOU 128(DI), X7
MOVOU 144(DI), X9

MOVO X1, X10
MOVO X3, X11
MOVO X5, X12
MOVO X7, X13
MOVO X9, X14

PXOR X0, X10
PXOR X2, X11
PXOR X4, X12
PXOR X6, X13
PXOR X8, X14
PAND X15, X10
PAND X15, X11
PAND X15, X12
PAND X15, X13
PAND X15, X14
PXOR X10, X0
PXOR X10, X1
PXOR X11, X2
PXOR X11, X3
PXOR X12, X4
PXOR X12, X5
PXOR X13, X6
PXOR X13, X7
PXOR X14, X8
PXOR X14, X9

MOVOU X0, 0(DI)
MOVOU X2, 16(DI)
MOVOU X4, 32(DI)
MOVOU X6, 48(DI)
MOVOU X8, 64(DI)
MOVOU X1, 80(DI)
MOVOU X3, 96(DI)
MOVOU X5, 112(DI)
MOVOU X7, 128(DI)
MOVOU X9, 144(DI)
RET

@ -2,12 +2,16 @@
|
|||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// We have a implementation in amd64 assembly so this code is only run on
|
||||
// We have an implementation in amd64 assembly so this code is only run on
|
||||
// non-amd64 platforms. The amd64 assembly does not support gccgo.
|
||||
// +build !amd64 gccgo appengine
|
||||
|
||||
package curve25519
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
)
|
||||
|
||||
// This code is a port of the public domain, "ref10" implementation of
|
||||
// curve25519 from SUPERCOP 20130419 by D. J. Bernstein.
|
||||
|
||||
|
@ -50,17 +54,11 @@ func feCopy(dst, src *fieldElement) {
|
|||
//
|
||||
// Preconditions: b in {0,1}.
|
||||
func feCSwap(f, g *fieldElement, b int32) {
|
||||
var x fieldElement
|
||||
b = -b
|
||||
for i := range x {
|
||||
x[i] = b & (f[i] ^ g[i])
|
||||
}
|
||||
|
||||
for i := range f {
|
||||
f[i] ^= x[i]
|
||||
}
|
||||
for i := range g {
|
||||
g[i] ^= x[i]
|
||||
t := b & (f[i] ^ g[i])
|
||||
f[i] ^= t
|
||||
g[i] ^= t
|
||||
}
|
||||
}
|
||||
|
||||
|
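The rewritten loop above folds the temporary array into a single pass: negating b yields an all-ones mask when b is 1 and zero when b is 0, so the XOR/AND/XOR sequence swaps the limbs without branching on the secret bit. A minimal sketch of the same idea, using its own small type rather than the package's fieldElement:

```go
package main

import "fmt"

// condSwap swaps a and b when choice == 1 and leaves them untouched when
// choice == 0, without branching on the secret value.
func condSwap(a, b *[5]int32, choice int32) {
	mask := -choice // 0x00000000 or 0xFFFFFFFF
	for i := range a {
		t := mask & (a[i] ^ b[i])
		a[i] ^= t
		b[i] ^= t
	}
}

func main() {
	x := [5]int32{1, 2, 3, 4, 5}
	y := [5]int32{6, 7, 8, 9, 10}
	condSwap(&x, &y, 1)
	fmt.Println(x, y) // [6 7 8 9 10] [1 2 3 4 5]
}
```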
@ -75,12 +73,7 @@ func load3(in []byte) int64 {
|
|||
|
||||
// load4 reads a 32-bit, little-endian value from in.
|
||||
func load4(in []byte) int64 {
|
||||
var r int64
|
||||
r = int64(in[0])
|
||||
r |= int64(in[1]) << 8
|
||||
r |= int64(in[2]) << 16
|
||||
r |= int64(in[3]) << 24
|
||||
return r
|
||||
return int64(binary.LittleEndian.Uint32(in))
|
||||
}
|
||||
|
||||
func feFromBytes(dst *fieldElement, src *[32]byte) {
|
||||
|
|
|
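The load4 change above replaces the manual shift-and-or assembly of four bytes with encoding/binary; the two forms produce the same value, as this small check illustrates:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	in := []byte{0x78, 0x56, 0x34, 0x12}
	manual := int64(in[0]) | int64(in[1])<<8 | int64(in[2])<<16 | int64(in[3])<<24
	lib := int64(binary.LittleEndian.Uint32(in))
	fmt.Println(manual == lib, manual) // true 305419896 (0x12345678)
}
```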
@ -3,7 +3,7 @@
|
|||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Package curve25519 provides an implementation of scalar multiplication on
|
||||
// the elliptic curve known as curve25519. See http://cr.yp.to/ecdh.html
|
||||
// the elliptic curve known as curve25519. See https://cr.yp.to/ecdh.html
|
||||
package curve25519 // import "golang.org/x/crypto/curve25519"
|
||||
|
||||
// basePoint is the x coordinate of the generator of the curve.
|
||||
|
|
|
@ -3,10 +3,12 @@
|
|||
// license that can be found in the LICENSE file.
|
||||
|
||||
// This code was translated into a form compatible with 6a from the public
|
||||
// domain sources in SUPERCOP: http://bench.cr.yp.to/supercop.html
|
||||
// domain sources in SUPERCOP: https://bench.cr.yp.to/supercop.html
|
||||
|
||||
// +build amd64,!gccgo,!appengine
|
||||
|
||||
#include "const_amd64.h"
|
||||
|
||||
// func freeze(inout *[5]uint64)
|
||||
TEXT ·freeze(SB),7,$0-8
|
||||
MOVQ inout+0(FP), DI
|
||||
|
@ -16,7 +18,7 @@ TEXT ·freeze(SB),7,$0-8
|
|||
MOVQ 16(DI),CX
|
||||
MOVQ 24(DI),R8
|
||||
MOVQ 32(DI),R9
|
||||
MOVQ ·REDMASK51(SB),AX
|
||||
MOVQ $REDMASK51,AX
|
||||
MOVQ AX,R10
|
||||
SUBQ $18,R10
|
||||
MOVQ $3,R11
|
||||
|
|
|
@ -3,10 +3,12 @@
|
|||
// license that can be found in the LICENSE file.
|
||||
|
||||
// This code was translated into a form compatible with 6a from the public
|
||||
// domain sources in SUPERCOP: http://bench.cr.yp.to/supercop.html
|
||||
// domain sources in SUPERCOP: https://bench.cr.yp.to/supercop.html
|
||||
|
||||
// +build amd64,!gccgo,!appengine
|
||||
|
||||
#include "const_amd64.h"
|
||||
|
||||
// func ladderstep(inout *[5][5]uint64)
|
||||
TEXT ·ladderstep(SB),0,$296-8
|
||||
MOVQ inout+0(FP),DI
|
||||
|
@ -118,7 +120,7 @@ TEXT ·ladderstep(SB),0,$296-8
|
|||
MULQ 72(SP)
|
||||
ADDQ AX,R12
|
||||
ADCQ DX,R13
|
||||
MOVQ ·REDMASK51(SB),DX
|
||||
MOVQ $REDMASK51,DX
|
||||
SHLQ $13,CX:SI
|
||||
ANDQ DX,SI
|
||||
SHLQ $13,R9:R8
|
||||
|
@ -233,7 +235,7 @@ TEXT ·ladderstep(SB),0,$296-8
|
|||
MULQ 32(SP)
|
||||
ADDQ AX,R12
|
||||
ADCQ DX,R13
|
||||
MOVQ ·REDMASK51(SB),DX
|
||||
MOVQ $REDMASK51,DX
|
||||
SHLQ $13,CX:SI
|
||||
ANDQ DX,SI
|
||||
SHLQ $13,R9:R8
|
||||
|
@ -438,7 +440,7 @@ TEXT ·ladderstep(SB),0,$296-8
|
|||
MULQ 72(SP)
|
||||
ADDQ AX,R12
|
||||
ADCQ DX,R13
|
||||
MOVQ ·REDMASK51(SB),DX
|
||||
MOVQ $REDMASK51,DX
|
||||
SHLQ $13,CX:SI
|
||||
ANDQ DX,SI
|
||||
SHLQ $13,R9:R8
|
||||
|
@ -588,7 +590,7 @@ TEXT ·ladderstep(SB),0,$296-8
|
|||
MULQ 32(SP)
|
||||
ADDQ AX,R12
|
||||
ADCQ DX,R13
|
||||
MOVQ ·REDMASK51(SB),DX
|
||||
MOVQ $REDMASK51,DX
|
||||
SHLQ $13,CX:SI
|
||||
ANDQ DX,SI
|
||||
SHLQ $13,R9:R8
|
||||
|
@ -728,7 +730,7 @@ TEXT ·ladderstep(SB),0,$296-8
|
|||
MULQ 152(DI)
|
||||
ADDQ AX,R12
|
||||
ADCQ DX,R13
|
||||
MOVQ ·REDMASK51(SB),DX
|
||||
MOVQ $REDMASK51,DX
|
||||
SHLQ $13,CX:SI
|
||||
ANDQ DX,SI
|
||||
SHLQ $13,R9:R8
|
||||
|
@ -843,7 +845,7 @@ TEXT ·ladderstep(SB),0,$296-8
|
|||
MULQ 192(DI)
|
||||
ADDQ AX,R12
|
||||
ADCQ DX,R13
|
||||
MOVQ ·REDMASK51(SB),DX
|
||||
MOVQ $REDMASK51,DX
|
||||
SHLQ $13,CX:SI
|
||||
ANDQ DX,SI
|
||||
SHLQ $13,R9:R8
|
||||
|
@ -993,7 +995,7 @@ TEXT ·ladderstep(SB),0,$296-8
|
|||
MULQ 32(DI)
|
||||
ADDQ AX,R12
|
||||
ADCQ DX,R13
|
||||
MOVQ ·REDMASK51(SB),DX
|
||||
MOVQ $REDMASK51,DX
|
||||
SHLQ $13,CX:SI
|
||||
ANDQ DX,SI
|
||||
SHLQ $13,R9:R8
|
||||
|
@ -1143,7 +1145,7 @@ TEXT ·ladderstep(SB),0,$296-8
|
|||
MULQ 112(SP)
|
||||
ADDQ AX,R12
|
||||
ADCQ DX,R13
|
||||
MOVQ ·REDMASK51(SB),DX
|
||||
MOVQ $REDMASK51,DX
|
||||
SHLQ $13,CX:SI
|
||||
ANDQ DX,SI
|
||||
SHLQ $13,R9:R8
|
||||
|
@ -1329,7 +1331,7 @@ TEXT ·ladderstep(SB),0,$296-8
|
|||
MULQ 192(SP)
|
||||
ADDQ AX,R12
|
||||
ADCQ DX,R13
|
||||
MOVQ ·REDMASK51(SB),DX
|
||||
MOVQ $REDMASK51,DX
|
||||
SHLQ $13,CX:SI
|
||||
ANDQ DX,SI
|
||||
SHLQ $13,R9:R8
|
||||
|
|
|
@ -3,10 +3,12 @@
|
|||
// license that can be found in the LICENSE file.
|
||||
|
||||
// This code was translated into a form compatible with 6a from the public
|
||||
// domain sources in SUPERCOP: http://bench.cr.yp.to/supercop.html
|
||||
// domain sources in SUPERCOP: https://bench.cr.yp.to/supercop.html
|
||||
|
||||
// +build amd64,!gccgo,!appengine
|
||||
|
||||
#include "const_amd64.h"
|
||||
|
||||
// func mul(dest, a, b *[5]uint64)
|
||||
TEXT ·mul(SB),0,$16-24
|
||||
MOVQ dest+0(FP), DI
|
||||
|
@ -121,7 +123,7 @@ TEXT ·mul(SB),0,$16-24
|
|||
MULQ 32(CX)
|
||||
ADDQ AX,R14
|
||||
ADCQ DX,R15
|
||||
MOVQ ·REDMASK51(SB),SI
|
||||
MOVQ $REDMASK51,SI
|
||||
SHLQ $13,R9:R8
|
||||
ANDQ SI,R8
|
||||
SHLQ $13,R11:R10
|
||||
|
|
|
@ -3,10 +3,12 @@
|
|||
// license that can be found in the LICENSE file.
|
||||
|
||||
// This code was translated into a form compatible with 6a from the public
|
||||
// domain sources in SUPERCOP: http://bench.cr.yp.to/supercop.html
|
||||
// domain sources in SUPERCOP: https://bench.cr.yp.to/supercop.html
|
||||
|
||||
// +build amd64,!gccgo,!appengine
|
||||
|
||||
#include "const_amd64.h"
|
||||
|
||||
// func square(out, in *[5]uint64)
|
||||
TEXT ·square(SB),7,$0-16
|
||||
MOVQ out+0(FP), DI
|
||||
|
@ -84,7 +86,7 @@ TEXT ·square(SB),7,$0-16
|
|||
MULQ 32(SI)
|
||||
ADDQ AX,R13
|
||||
ADCQ DX,R14
|
||||
MOVQ ·REDMASK51(SB),SI
|
||||
MOVQ $REDMASK51,SI
|
||||
SHLQ $13,R8:CX
|
||||
ANDQ SI,CX
|
||||
SHLQ $13,R10:R9
|
||||
|
|
|
@ -3,20 +3,20 @@
|
|||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Package ed25519 implements the Ed25519 signature algorithm. See
|
||||
// http://ed25519.cr.yp.to/.
|
||||
// https://ed25519.cr.yp.to/.
|
||||
//
|
||||
// These functions are also compatible with the “Ed25519” function defined in
|
||||
// https://tools.ietf.org/html/draft-irtf-cfrg-eddsa-05.
|
||||
// RFC 8032.
|
||||
package ed25519
|
||||
|
||||
// This code is a port of the public domain, “ref10” implementation of ed25519
|
||||
// from SUPERCOP.
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto"
|
||||
cryptorand "crypto/rand"
|
||||
"crypto/sha512"
|
||||
"crypto/subtle"
|
||||
"errors"
|
||||
"io"
|
||||
"strconv"
|
||||
|
@ -177,5 +177,5 @@ func Verify(publicKey PublicKey, message, sig []byte) bool {
|
|||
|
||||
var checkR [32]byte
|
||||
R.ToBytes(&checkR)
|
||||
return subtle.ConstantTimeCompare(sig[:32], checkR[:]) == 1
|
||||
return bytes.Equal(sig[:32], checkR[:])
|
||||
}
|
||||
|
|
|
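The package comment now points at RFC 8032, and Verify compares the recomputed R against the signature with bytes.Equal (both values are public, so a constant-time comparison is not needed there). A minimal round-trip with this package's API, as a hedged sketch:

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/ed25519"
)

func main() {
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	msg := []byte("attack at dawn")
	sig := ed25519.Sign(priv, msg) // 64-byte signature

	fmt.Println(ed25519.Verify(pub, msg, sig))               // true
	fmt.Println(ed25519.Verify(pub, []byte("retreat"), sig)) // false
}
```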
@ -51,13 +51,12 @@ func (b *buffer) write(buf []byte) {
|
|||
}
|
||||
|
||||
// eof closes the buffer. Reads from the buffer once all
|
||||
// the data has been consumed will receive os.EOF.
|
||||
func (b *buffer) eof() error {
|
||||
// the data has been consumed will receive io.EOF.
|
||||
func (b *buffer) eof() {
|
||||
b.Cond.L.Lock()
|
||||
b.closed = true
|
||||
b.Cond.Signal()
|
||||
b.Cond.L.Unlock()
|
||||
return nil
|
||||
}
|
||||
|
||||
// Read reads data from the internal buffer in buf. Reads will block
|
||||
|
|
|
@ -251,10 +251,18 @@ type CertChecker struct {
|
|||
// for user certificates.
|
||||
SupportedCriticalOptions []string
|
||||
|
||||
// IsAuthority should return true if the key is recognized as
|
||||
// an authority. This allows for certificates to be signed by other
|
||||
// certificates.
|
||||
IsAuthority func(auth PublicKey) bool
|
||||
// IsUserAuthority should return true if the key is recognized as an
|
||||
// authority for the given user certificate. This allows for
|
||||
// certificates to be signed by other certificates. This must be set
|
||||
// if this CertChecker will be checking user certificates.
|
||||
IsUserAuthority func(auth PublicKey) bool
|
||||
|
||||
// IsHostAuthority should report whether the key is recognized as
|
||||
// an authority for this host. This allows for certificates to be
|
||||
// signed by other keys, and for those other keys to only be valid
|
||||
// signers for particular hostnames. This must be set if this
|
||||
// CertChecker will be checking host certificates.
|
||||
IsHostAuthority func(auth PublicKey, address string) bool
|
||||
|
||||
// Clock is used for verifying time stamps. If nil, time.Now
|
||||
// is used.
|
||||
|
@ -268,7 +276,7 @@ type CertChecker struct {
|
|||
// HostKeyFallback is called when CertChecker.CheckHostKey encounters a
|
||||
// public key that is not a certificate. It must implement host key
|
||||
// validation or else, if nil, all such keys are rejected.
|
||||
HostKeyFallback func(addr string, remote net.Addr, key PublicKey) error
|
||||
HostKeyFallback HostKeyCallback
|
||||
|
||||
// IsRevoked is called for each certificate so that revocation checking
|
||||
// can be implemented. It should return true if the given certificate
|
||||
|
@ -290,8 +298,17 @@ func (c *CertChecker) CheckHostKey(addr string, remote net.Addr, key PublicKey)
|
|||
if cert.CertType != HostCert {
|
||||
return fmt.Errorf("ssh: certificate presented as a host key has type %d", cert.CertType)
|
||||
}
|
||||
if !c.IsHostAuthority(cert.SignatureKey, addr) {
|
||||
return fmt.Errorf("ssh: no authorities for hostname: %v", addr)
|
||||
}
|
||||
|
||||
return c.CheckCert(addr, cert)
|
||||
hostname, _, err := net.SplitHostPort(addr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Pass hostname only as principal for host certificates (consistent with OpenSSH)
|
||||
return c.CheckCert(hostname, cert)
|
||||
}
|
||||
|
||||
// Authenticate checks a user certificate. Authenticate can be used as
|
||||
|
@ -308,6 +325,9 @@ func (c *CertChecker) Authenticate(conn ConnMetadata, pubKey PublicKey) (*Permis
|
|||
if cert.CertType != UserCert {
|
||||
return nil, fmt.Errorf("ssh: cert has type %d", cert.CertType)
|
||||
}
|
||||
if !c.IsUserAuthority(cert.SignatureKey) {
|
||||
return nil, fmt.Errorf("ssh: certificate signed by unrecognized authority")
|
||||
}
|
||||
|
||||
if err := c.CheckCert(conn.User(), cert); err != nil {
|
||||
return nil, err
|
||||
|
@ -356,10 +376,6 @@ func (c *CertChecker) CheckCert(principal string, cert *Certificate) error {
|
|||
}
|
||||
}
|
||||
|
||||
if !c.IsAuthority(cert.SignatureKey) {
|
||||
return fmt.Errorf("ssh: certificate signed by unrecognized authority")
|
||||
}
|
||||
|
||||
clock := c.Clock
|
||||
if clock == nil {
|
||||
clock = time.Now
|
||||
|
|
|
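The CertChecker changes above split IsAuthority into IsUserAuthority and IsHostAuthority, and CheckHostKey now strips the port before calling CheckCert. A hedged sketch of wiring a host-certificate check into a client config; the CA key path is a placeholder assumption:

```go
package main

import (
	"bytes"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// hostCertConfig builds a client config that accepts host certificates
// signed by the CA public key stored at caPath (authorized_keys format).
func hostCertConfig(caPath string) (*ssh.ClientConfig, error) {
	raw, err := os.ReadFile(caPath)
	if err != nil {
		return nil, err
	}
	caPub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
	if err != nil {
		return nil, err
	}

	checker := &ssh.CertChecker{
		// Trust host certificates signed by our CA key.
		IsHostAuthority: func(auth ssh.PublicKey, address string) bool {
			return bytes.Equal(auth.Marshal(), caPub.Marshal())
		},
	}

	return &ssh.ClientConfig{
		User:            "demo",
		HostKeyCallback: checker.CheckHostKey,
	}, nil
}

func main() {
	if _, err := hostCertConfig("/etc/ssh/trusted_host_ca.pub"); err != nil { // placeholder path
		log.Fatal(err)
	}
}
```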
@ -461,8 +461,8 @@ func (m *mux) newChannel(chanType string, direction channelDirection, extraData
|
|||
pending: newBuffer(),
|
||||
extPending: newBuffer(),
|
||||
direction: direction,
|
||||
incomingRequests: make(chan *Request, 16),
|
||||
msg: make(chan interface{}, 16),
|
||||
incomingRequests: make(chan *Request, chanSize),
|
||||
msg: make(chan interface{}, chanSize),
|
||||
chanType: chanType,
|
||||
extraData: extraData,
|
||||
mux: m,
|
||||
|
|
|
@ -135,6 +135,7 @@ const prefixLen = 5
|
|||
type streamPacketCipher struct {
|
||||
mac hash.Hash
|
||||
cipher cipher.Stream
|
||||
etm bool
|
||||
|
||||
// The following members are to avoid per-packet allocations.
|
||||
prefix [prefixLen]byte
|
||||
|
@ -150,7 +151,14 @@ func (s *streamPacketCipher) readPacket(seqNum uint32, r io.Reader) ([]byte, err
|
|||
return nil, err
|
||||
}
|
||||
|
||||
s.cipher.XORKeyStream(s.prefix[:], s.prefix[:])
|
||||
var encryptedPaddingLength [1]byte
|
||||
if s.mac != nil && s.etm {
|
||||
copy(encryptedPaddingLength[:], s.prefix[4:5])
|
||||
s.cipher.XORKeyStream(s.prefix[4:5], s.prefix[4:5])
|
||||
} else {
|
||||
s.cipher.XORKeyStream(s.prefix[:], s.prefix[:])
|
||||
}
|
||||
|
||||
length := binary.BigEndian.Uint32(s.prefix[0:4])
|
||||
paddingLength := uint32(s.prefix[4])
|
||||
|
||||
|
@ -159,7 +167,12 @@ func (s *streamPacketCipher) readPacket(seqNum uint32, r io.Reader) ([]byte, err
|
|||
s.mac.Reset()
|
||||
binary.BigEndian.PutUint32(s.seqNumBytes[:], seqNum)
|
||||
s.mac.Write(s.seqNumBytes[:])
|
||||
s.mac.Write(s.prefix[:])
|
||||
if s.etm {
|
||||
s.mac.Write(s.prefix[:4])
|
||||
s.mac.Write(encryptedPaddingLength[:])
|
||||
} else {
|
||||
s.mac.Write(s.prefix[:])
|
||||
}
|
||||
macSize = uint32(s.mac.Size())
|
||||
}
|
||||
|
||||
|
@ -184,10 +197,17 @@ func (s *streamPacketCipher) readPacket(seqNum uint32, r io.Reader) ([]byte, err
|
|||
}
|
||||
mac := s.packetData[length-1:]
|
||||
data := s.packetData[:length-1]
|
||||
|
||||
if s.mac != nil && s.etm {
|
||||
s.mac.Write(data)
|
||||
}
|
||||
|
||||
s.cipher.XORKeyStream(data, data)
|
||||
|
||||
if s.mac != nil {
|
||||
s.mac.Write(data)
|
||||
if !s.etm {
|
||||
s.mac.Write(data)
|
||||
}
|
||||
s.macResult = s.mac.Sum(s.macResult[:0])
|
||||
if subtle.ConstantTimeCompare(s.macResult, mac) != 1 {
|
||||
return nil, errors.New("ssh: MAC failure")
|
||||
|
@ -203,7 +223,13 @@ func (s *streamPacketCipher) writePacket(seqNum uint32, w io.Writer, rand io.Rea
|
|||
return errors.New("ssh: packet too large")
|
||||
}
|
||||
|
||||
paddingLength := packetSizeMultiple - (prefixLen+len(packet))%packetSizeMultiple
|
||||
aadlen := 0
|
||||
if s.mac != nil && s.etm {
|
||||
// packet length is not encrypted for EtM modes
|
||||
aadlen = 4
|
||||
}
|
||||
|
||||
paddingLength := packetSizeMultiple - (prefixLen+len(packet)-aadlen)%packetSizeMultiple
|
||||
if paddingLength < 4 {
|
||||
paddingLength += packetSizeMultiple
|
||||
}
|
||||
|
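For encrypt-then-MAC modes the 4-byte length field is authenticated but not encrypted, so it is excluded (aadlen = 4) from the region that must be a multiple of the cipher block size. A small worked example of the padding computation above; packetSizeMultiple is assumed to be 16 here:

```go
package main

import "fmt"

const prefixLen = 5 // 4-byte packet length + 1-byte padding length

// paddingLen mirrors the computation in writePacket above.
func paddingLen(payload, aadlen, packetSizeMultiple int) int {
	p := packetSizeMultiple - (prefixLen+payload-aadlen)%packetSizeMultiple
	if p < 4 {
		p += packetSizeMultiple // RFC 4253 requires at least 4 bytes of padding
	}
	return p
}

func main() {
	// 32-byte payload, 16-byte block size.
	fmt.Println(paddingLen(32, 0, 16)) // non-EtM: 16 - (5+32)%16   = 11
	fmt.Println(paddingLen(32, 4, 16)) // EtM:     16 - (5+32-4)%16 = 15
}
```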
@ -220,15 +246,37 @@ func (s *streamPacketCipher) writePacket(seqNum uint32, w io.Writer, rand io.Rea
|
|||
s.mac.Reset()
|
||||
binary.BigEndian.PutUint32(s.seqNumBytes[:], seqNum)
|
||||
s.mac.Write(s.seqNumBytes[:])
|
||||
|
||||
if s.etm {
|
||||
// For EtM algorithms, the packet length must stay unencrypted,
|
||||
// but the following data (padding length) must be encrypted
|
||||
s.cipher.XORKeyStream(s.prefix[4:5], s.prefix[4:5])
|
||||
}
|
||||
|
||||
s.mac.Write(s.prefix[:])
|
||||
|
||||
if !s.etm {
|
||||
// For non-EtM algorithms, the MAC is computed over the unencrypted data
|
||||
s.mac.Write(packet)
|
||||
s.mac.Write(padding)
|
||||
}
|
||||
}
|
||||
|
||||
if !(s.mac != nil && s.etm) {
|
||||
// For EtM algorithms, the padding length has already been encrypted
|
||||
// and the packet length must remain unencrypted
|
||||
s.cipher.XORKeyStream(s.prefix[:], s.prefix[:])
|
||||
}
|
||||
|
||||
s.cipher.XORKeyStream(packet, packet)
|
||||
s.cipher.XORKeyStream(padding, padding)
|
||||
|
||||
if s.mac != nil && s.etm {
|
||||
// For EtM algorithms, packet and padding must be encrypted
|
||||
s.mac.Write(packet)
|
||||
s.mac.Write(padding)
|
||||
}
|
||||
|
||||
s.cipher.XORKeyStream(s.prefix[:], s.prefix[:])
|
||||
s.cipher.XORKeyStream(packet, packet)
|
||||
s.cipher.XORKeyStream(padding, padding)
|
||||
|
||||
if _, err := w.Write(s.prefix[:]); err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -256,7 +304,7 @@ type gcmCipher struct {
|
|||
buf []byte
|
||||
}
|
||||
|
||||
func newGCMCipher(iv, key, macKey []byte) (packetCipher, error) {
|
||||
func newGCMCipher(iv, key []byte) (packetCipher, error) {
|
||||
c, err := aes.NewCipher(key)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
|
@ -344,7 +392,9 @@ func (c *gcmCipher) readPacket(seqNum uint32, r io.Reader) ([]byte, error) {
|
|||
c.incIV()
|
||||
|
||||
padding := plain[0]
|
||||
if padding < 4 || padding >= 20 {
|
||||
if padding < 4 {
|
||||
// padding is a byte, so it automatically satisfies
|
||||
// the maximum size, which is 255.
|
||||
return nil, fmt.Errorf("ssh: illegal padding %d", padding)
|
||||
}
|
||||
|
||||
|
|
|
@ -5,15 +5,17 @@
|
|||
package ssh
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
"net"
|
||||
"os"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
// Client implements a traditional SSH client that supports shells,
|
||||
// subprocesses, port forwarding and tunneled dialing.
|
||||
// subprocesses, TCP port/streamlocal forwarding and tunneled dialing.
|
||||
type Client struct {
|
||||
Conn
|
||||
|
||||
|
@ -40,7 +42,7 @@ func (c *Client) HandleChannelOpen(channelType string) <-chan NewChannel {
|
|||
return nil
|
||||
}
|
||||
|
||||
ch = make(chan NewChannel, 16)
|
||||
ch = make(chan NewChannel, chanSize)
|
||||
c.channelHandlers[channelType] = ch
|
||||
return ch
|
||||
}
|
||||
|
@ -59,6 +61,7 @@ func NewClient(c Conn, chans <-chan NewChannel, reqs <-chan *Request) *Client {
|
|||
conn.forwards.closeAll()
|
||||
}()
|
||||
go conn.forwards.handleChannels(conn.HandleChannelOpen("forwarded-tcpip"))
|
||||
go conn.forwards.handleChannels(conn.HandleChannelOpen("forwarded-streamlocal@openssh.com"))
|
||||
return conn
|
||||
}
|
||||
|
||||
|
@ -68,6 +71,11 @@ func NewClient(c Conn, chans <-chan NewChannel, reqs <-chan *Request) *Client {
|
|||
func NewClientConn(c net.Conn, addr string, config *ClientConfig) (Conn, <-chan NewChannel, <-chan *Request, error) {
|
||||
fullConf := *config
|
||||
fullConf.SetDefaults()
|
||||
if fullConf.HostKeyCallback == nil {
|
||||
c.Close()
|
||||
return nil, nil, nil, errors.New("ssh: must specify HostKeyCallback")
|
||||
}
|
||||
|
||||
conn := &connection{
|
||||
sshConn: sshConn{conn: c},
|
||||
}
|
||||
|
@ -97,13 +105,11 @@ func (c *connection) clientHandshake(dialAddress string, config *ClientConfig) e
|
|||
c.transport = newClientTransport(
|
||||
newTransport(c.sshConn.conn, config.Rand, true /* is client */),
|
||||
c.clientVersion, c.serverVersion, config, dialAddress, c.sshConn.RemoteAddr())
|
||||
if err := c.transport.requestInitialKeyChange(); err != nil {
|
||||
if err := c.transport.waitSession(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// We just did the key change, so the session ID is established.
|
||||
c.sessionID = c.transport.getSessionID()
|
||||
|
||||
return c.clientAuthenticate(config)
|
||||
}
|
||||
|
||||
|
@ -175,6 +181,17 @@ func Dial(network, addr string, config *ClientConfig) (*Client, error) {
|
|||
return NewClient(c, chans, reqs), nil
|
||||
}
|
||||
|
||||
// HostKeyCallback is the function type used for verifying server
|
||||
// keys. A HostKeyCallback must return nil if the host key is OK, or
|
||||
// an error to reject it. It receives the hostname as passed to Dial
|
||||
// or NewClientConn. The remote address is the RemoteAddr of the
|
||||
// net.Conn underlying the SSH connection.
|
||||
type HostKeyCallback func(hostname string, remote net.Addr, key PublicKey) error
|
||||
|
||||
// BannerCallback is the function type used to handle the banner sent by
|
||||
// the server. A BannerCallback receives the message sent by the remote server.
|
||||
type BannerCallback func(message string) error
|
||||
|
||||
// A ClientConfig structure is used to configure a Client. It must not be
|
||||
// modified after having been passed to an SSH function.
|
||||
type ClientConfig struct {
|
||||
|
@ -190,10 +207,18 @@ type ClientConfig struct {
|
|||
// be used during authentication.
|
||||
Auth []AuthMethod
|
||||
|
||||
// HostKeyCallback, if not nil, is called during the cryptographic
|
||||
// handshake to validate the server's host key. A nil HostKeyCallback
|
||||
// implies that all host keys are accepted.
|
||||
HostKeyCallback func(hostname string, remote net.Addr, key PublicKey) error
|
||||
// HostKeyCallback is called during the cryptographic
|
||||
// handshake to validate the server's host key. The client
|
||||
// configuration must supply this callback for the connection
|
||||
// to succeed. The functions InsecureIgnoreHostKey or
|
||||
// FixedHostKey can be used for simplistic host key checks.
|
||||
HostKeyCallback HostKeyCallback
|
||||
|
||||
// BannerCallback is called during the SSH handshake to display a custom
|
||||
// server's message. The client configuration can supply this callback to
|
||||
// handle it as desired. The function BannerDisplayStderr can be used for
|
||||
// simplistic display on Stderr.
|
||||
BannerCallback BannerCallback
|
||||
|
||||
// ClientVersion contains the version identification string that will
|
||||
// be used for the connection. If empty, a reasonable default is used.
|
||||
|
@ -211,3 +236,43 @@ type ClientConfig struct {
|
|||
// A Timeout of zero means no timeout.
|
||||
Timeout time.Duration
|
||||
}
|
||||
|
||||
// InsecureIgnoreHostKey returns a function that can be used for
|
||||
// ClientConfig.HostKeyCallback to accept any host key. It should
|
||||
// not be used for production code.
|
||||
func InsecureIgnoreHostKey() HostKeyCallback {
|
||||
return func(hostname string, remote net.Addr, key PublicKey) error {
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
type fixedHostKey struct {
|
||||
key PublicKey
|
||||
}
|
||||
|
||||
func (f *fixedHostKey) check(hostname string, remote net.Addr, key PublicKey) error {
|
||||
if f.key == nil {
|
||||
return fmt.Errorf("ssh: required host key was nil")
|
||||
}
|
||||
if !bytes.Equal(key.Marshal(), f.key.Marshal()) {
|
||||
return fmt.Errorf("ssh: host key mismatch")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// FixedHostKey returns a function for use in
|
||||
// ClientConfig.HostKeyCallback to accept only a specific host key.
|
||||
func FixedHostKey(key PublicKey) HostKeyCallback {
|
||||
hk := &fixedHostKey{key}
|
||||
return hk.check
|
||||
}
|
||||
|
||||
// BannerDisplayStderr returns a function that can be used for
|
||||
// ClientConfig.BannerCallback to display banners on os.Stderr.
|
||||
func BannerDisplayStderr() BannerCallback {
|
||||
return func(banner string) error {
|
||||
_, err := os.Stderr.WriteString(banner)
|
||||
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
|
|
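With these changes HostKeyCallback is mandatory, and FixedHostKey, InsecureIgnoreHostKey and BannerDisplayStderr provide ready-made callbacks. A hedged sketch of a client configuration using them; the host address, credentials and key file path are placeholders:

```go
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func dialPinned(addr, user, password string, hostKey ssh.PublicKey) (*ssh.Client, error) {
	config := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.Password(password)},
		// Pin the server to one known key; ssh.InsecureIgnoreHostKey()
		// would accept anything and is only suitable for tests.
		HostKeyCallback: ssh.FixedHostKey(hostKey),
		// Print any pre-authentication banner the server sends.
		BannerCallback: ssh.BannerDisplayStderr(),
		Timeout:        5 * time.Second,
	}
	return ssh.Dial("tcp", addr, config)
}

func main() {
	raw, err := os.ReadFile("host_key.pub") // placeholder path to the server's public key
	if err != nil {
		log.Fatal(err)
	}
	hostKey, _, _, _, err := ssh.ParseAuthorizedKey(raw)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := dialPinned("example.com:22", "demo", "secret", hostKey); err != nil {
		log.Fatal(err)
	}
}
```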
@ -30,8 +30,10 @@ func (c *connection) clientAuthenticate(config *ClientConfig) error {
|
|||
// then any untried methods suggested by the server.
|
||||
tried := make(map[string]bool)
|
||||
var lastMethods []string
|
||||
|
||||
sessionID := c.transport.getSessionID()
|
||||
for auth := AuthMethod(new(noneAuth)); auth != nil; {
|
||||
ok, methods, err := auth.auth(c.transport.getSessionID(), config.User, c.transport, config.Rand)
|
||||
ok, methods, err := auth.auth(sessionID, config.User, c.transport, config.Rand)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -177,31 +179,26 @@ func (cb publicKeyCallback) method() string {
|
|||
}
|
||||
|
||||
func (cb publicKeyCallback) auth(session []byte, user string, c packetConn, rand io.Reader) (bool, []string, error) {
|
||||
// Authentication is performed in two stages. The first stage sends an
|
||||
// enquiry to test if each key is acceptable to the remote. The second
|
||||
// stage attempts to authenticate with the valid keys obtained in the
|
||||
// first stage.
|
||||
// Authentication is performed by sending an enquiry to test if a key is
|
||||
// acceptable to the remote. If the key is acceptable, the client will
|
||||
// attempt to authenticate with the valid key. If not, the client will repeat
|
||||
// the process with the remaining keys.
|
||||
|
||||
signers, err := cb()
|
||||
if err != nil {
|
||||
return false, nil, err
|
||||
}
|
||||
var validKeys []Signer
|
||||
for _, signer := range signers {
|
||||
if ok, err := validateKey(signer.PublicKey(), user, c); ok {
|
||||
validKeys = append(validKeys, signer)
|
||||
} else {
|
||||
if err != nil {
|
||||
return false, nil, err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// methods that may continue if this auth is not successful.
|
||||
var methods []string
|
||||
for _, signer := range validKeys {
|
||||
pub := signer.PublicKey()
|
||||
for _, signer := range signers {
|
||||
ok, err := validateKey(signer.PublicKey(), user, c)
|
||||
if err != nil {
|
||||
return false, nil, err
|
||||
}
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
|
||||
pub := signer.PublicKey()
|
||||
pubKey := pub.Marshal()
|
||||
sign, err := signer.Sign(rand, buildDataSignedForAuth(session, userAuthRequestMsg{
|
||||
User: user,
|
||||
|
@ -234,13 +231,29 @@ func (cb publicKeyCallback) auth(session []byte, user string, c packetConn, rand
|
|||
if err != nil {
|
||||
return false, nil, err
|
||||
}
|
||||
if success {
|
||||
|
||||
// If authentication succeeds or the list of available methods does not
|
||||
// contain the "publickey" method, do not attempt to authenticate with any
|
||||
// other keys. According to RFC 4252 Section 7, the latter can occur when
|
||||
// additional authentication methods are required.
|
||||
if success || !containsMethod(methods, cb.method()) {
|
||||
return success, methods, err
|
||||
}
|
||||
}
|
||||
|
||||
return false, methods, nil
|
||||
}
|
||||
|
||||
func containsMethod(methods []string, method string) bool {
|
||||
for _, m := range methods {
|
||||
if m == method {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
|
||||
// validateKey validates the key provided is acceptable to the server.
|
||||
func validateKey(key PublicKey, user string, c packetConn) (bool, error) {
|
||||
pubKey := key.Marshal()
|
||||
|
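The rewritten loop above tries each key in turn and stops either on success or when the server's continuation list no longer includes "publickey" (RFC 4252, Section 7, for example when a second factor is still required). A hedged sketch of a config that pairs a key with keyboard-interactive as that second method; the key path is an assumption:

```go
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func twoFactorConfig(keyPath string) (*ssh.ClientConfig, error) {
	pem, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(pem)
	if err != nil {
		return nil, err
	}

	return &ssh.ClientConfig{
		User: "demo",
		Auth: []ssh.AuthMethod{
			ssh.PublicKeys(signer),
			// Called if the server still wants another method after the key.
			ssh.KeyboardInteractive(func(user, instruction string, questions []string, echos []bool) ([]string, error) {
				answers := make([]string, len(questions))
				return answers, nil // real code would prompt the user
			}),
		},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
	}, nil
}

func main() {
	if _, err := twoFactorConfig(os.ExpandEnv("$HOME/.ssh/id_ed25519")); err != nil {
		log.Fatal(err)
	}
}
```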
@ -270,7 +283,9 @@ func confirmKeyAck(key PublicKey, c packetConn) (bool, error) {
|
|||
}
|
||||
switch packet[0] {
|
||||
case msgUserAuthBanner:
|
||||
// TODO(gpaul): add callback to present the banner to the user
|
||||
if err := handleBannerResponse(c, packet); err != nil {
|
||||
return false, err
|
||||
}
|
||||
case msgUserAuthPubKeyOk:
|
||||
var msg userAuthPubKeyOkMsg
|
||||
if err := Unmarshal(packet, &msg); err != nil {
|
||||
|
@ -312,7 +327,9 @@ func handleAuthResponse(c packetConn) (bool, []string, error) {
|
|||
|
||||
switch packet[0] {
|
||||
case msgUserAuthBanner:
|
||||
// TODO: add callback to present the banner to the user
|
||||
if err := handleBannerResponse(c, packet); err != nil {
|
||||
return false, nil, err
|
||||
}
|
||||
case msgUserAuthFailure:
|
||||
var msg userAuthFailureMsg
|
||||
if err := Unmarshal(packet, &msg); err != nil {
|
||||
|
@ -327,6 +344,24 @@ func handleAuthResponse(c packetConn) (bool, []string, error) {
|
|||
}
|
||||
}
|
||||
|
||||
func handleBannerResponse(c packetConn, packet []byte) error {
|
||||
var msg userAuthBannerMsg
|
||||
if err := Unmarshal(packet, &msg); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
transport, ok := c.(*handshakeTransport)
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
|
||||
if transport.bannerCallback != nil {
|
||||
return transport.bannerCallback(msg.Message)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// KeyboardInteractiveChallenge should print questions, optionally
|
||||
// disabling echoing (e.g. for passwords), and return all the answers.
|
||||
// Challenge may be called multiple times in a single session. After
|
||||
|
@ -336,7 +371,7 @@ func handleAuthResponse(c packetConn) (bool, []string, error) {
|
|||
// both CLI and GUI environments.
|
||||
type KeyboardInteractiveChallenge func(user, instruction string, questions []string, echos []bool) (answers []string, err error)
|
||||
|
||||
// KeyboardInteractive returns a AuthMethod using a prompt/response
|
||||
// KeyboardInteractive returns an AuthMethod using a prompt/response
|
||||
// sequence controlled by the server.
|
||||
func KeyboardInteractive(challenge KeyboardInteractiveChallenge) AuthMethod {
|
||||
return challenge
|
||||
|
@ -372,7 +407,9 @@ func (cb KeyboardInteractiveChallenge) auth(session []byte, user string, c packe
|
|||
// like handleAuthResponse, but with fewer options.
|
||||
switch packet[0] {
|
||||
case msgUserAuthBanner:
|
||||
// TODO: Print banners during userauth.
|
||||
if err := handleBannerResponse(c, packet); err != nil {
|
||||
return false, nil, err
|
||||
}
|
||||
continue
|
||||
case msgUserAuthInfoRequest:
|
||||
// OK
|
||||
|
|
|
@ -9,6 +9,7 @@ import (
|
|||
"crypto/rand"
|
||||
"fmt"
|
||||
"io"
|
||||
"math"
|
||||
"sync"
|
||||
|
||||
_ "crypto/sha1"
|
||||
|
@ -40,7 +41,7 @@ var supportedKexAlgos = []string{
|
|||
kexAlgoDH14SHA1, kexAlgoDH1SHA1,
|
||||
}
|
||||
|
||||
// supportedKexAlgos specifies the supported host-key algorithms (i.e. methods
|
||||
// supportedHostKeyAlgos specifies the supported host-key algorithms (i.e. methods
|
||||
// of authenticating servers) in preference order.
|
||||
var supportedHostKeyAlgos = []string{
|
||||
CertAlgoRSAv01, CertAlgoDSAv01, CertAlgoECDSA256v01,
|
||||
|
@ -56,7 +57,7 @@ var supportedHostKeyAlgos = []string{
|
|||
// This is based on RFC 4253, section 6.4, but with hmac-md5 variants removed
|
||||
// because they have reached the end of their useful life.
|
||||
var supportedMACs = []string{
|
||||
"hmac-sha2-256", "hmac-sha1", "hmac-sha1-96",
|
||||
"hmac-sha2-256-etm@openssh.com", "hmac-sha2-256", "hmac-sha1", "hmac-sha1-96",
|
||||
}
|
||||
|
||||
var supportedCompressions = []string{compressionNone}
|
||||
|
@ -104,6 +105,21 @@ type directionAlgorithms struct {
|
|||
Compression string
|
||||
}
|
||||
|
||||
// rekeyBytes returns the rekeying interval in bytes.
|
||||
func (a *directionAlgorithms) rekeyBytes() int64 {
|
||||
// According to RFC4344 block ciphers should rekey after
|
||||
// 2^(BLOCKSIZE/4) blocks. For all AES flavors BLOCKSIZE is
|
||||
// 128.
|
||||
switch a.Cipher {
|
||||
case "aes128-ctr", "aes192-ctr", "aes256-ctr", gcmCipherID, aes128cbcID:
|
||||
return 16 * (1 << 32)
|
||||
|
||||
}
|
||||
|
||||
// For others, stick with RFC4253 recommendation to rekey after 1 Gb of data.
|
||||
return 1 << 30
|
||||
}
|
||||
|
||||
type algorithms struct {
|
||||
kex string
|
||||
hostKey string
|
||||
|
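For the AES-based ciphers the RFC 4344 guidance works out to 2^(128/4) = 2^32 blocks of 16 bytes, that is 64 GiB before a rekey, versus the 1 GiB RFC 4253 fallback used for everything else. A tiny check of that arithmetic:

```go
package main

import "fmt"

func main() {
	const aesBlockSize = 16 // bytes; the AES block is 128 bits for all key sizes
	aesRekey := int64(aesBlockSize) * (1 << 32)
	fallback := int64(1) << 30

	fmt.Println(aesRekey, aesRekey>>30) // 68719476736 bytes = 64 GiB
	fmt.Println(fallback)               // 1073741824 bytes  = 1 GiB
}
```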
@ -171,7 +187,7 @@ type Config struct {
|
|||
|
||||
// The maximum number of bytes sent or received after which a
|
||||
// new key is negotiated. It must be at least 256. If
|
||||
// unspecified, 1 gigabyte is used.
|
||||
// unspecified, a size suitable for the chosen cipher is used.
|
||||
RekeyThreshold uint64
|
||||
|
||||
// The allowed key exchanges algorithms. If unspecified then a
|
||||
|
@ -215,11 +231,12 @@ func (c *Config) SetDefaults() {
|
|||
}
|
||||
|
||||
if c.RekeyThreshold == 0 {
|
||||
// RFC 4253, section 9 suggests rekeying after 1G.
|
||||
c.RekeyThreshold = 1 << 30
|
||||
}
|
||||
if c.RekeyThreshold < minRekeyThreshold {
|
||||
// cipher specific default
|
||||
} else if c.RekeyThreshold < minRekeyThreshold {
|
||||
c.RekeyThreshold = minRekeyThreshold
|
||||
} else if c.RekeyThreshold >= math.MaxInt64 {
|
||||
// Avoid weirdness if somebody uses -1 as a threshold.
|
||||
c.RekeyThreshold = math.MaxInt64
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -25,7 +25,7 @@ type ConnMetadata interface {
|
|||
// User returns the user ID for this connection.
|
||||
User() string
|
||||
|
||||
// SessionID returns the sesson hash, also denoted by H.
|
||||
// SessionID returns the session hash, also denoted by H.
|
||||
SessionID() []byte
|
||||
|
||||
// ClientVersion returns the client's version string as hashed
|
||||
|
|
|
@ -14,5 +14,8 @@ others.
|
|||
References:
|
||||
[PROTOCOL.certkeys]: http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/PROTOCOL.certkeys?rev=HEAD
|
||||
[SSH-PARAMETERS]: http://www.iana.org/assignments/ssh-parameters/ssh-parameters.xml#ssh-parameters-1
|
||||
|
||||
This package does not fall under the stability promise of the Go language itself,
|
||||
so its API may be changed when pressing needs arise.
|
||||
*/
|
||||
package ssh // import "golang.org/x/crypto/ssh"
|
||||
|
|
|
@ -19,6 +19,11 @@ import (
|
|||
// messages are wrong when using ECDH.
|
||||
const debugHandshake = false
|
||||
|
||||
// chanSize sets the amount of buffering for SSH connections. This is
|
||||
// primarily for testing: setting chanSize=0 uncovers deadlocks more
|
||||
// quickly.
|
||||
const chanSize = 16
|
||||
|
||||
// keyingTransport is a packet based transport that supports key
|
||||
// changes. It need not be thread-safe. It should pass through
|
||||
// msgNewKeys in both directions.
|
||||
|
@ -53,34 +58,65 @@ type handshakeTransport struct {
|
|||
incoming chan []byte
|
||||
readError error
|
||||
|
||||
mu sync.Mutex
|
||||
writeError error
|
||||
sentInitPacket []byte
|
||||
sentInitMsg *kexInitMsg
|
||||
pendingPackets [][]byte // Used when a key exchange is in progress.
|
||||
|
||||
// If the read loop wants to schedule a kex, it pings this
|
||||
// channel, and the write loop will send out a kex
|
||||
// message.
|
||||
requestKex chan struct{}
|
||||
|
||||
// If the other side requests or confirms a kex, its kexInit
|
||||
// packet is sent here for the write loop to find it.
|
||||
startKex chan *pendingKex
|
||||
|
||||
// data for host key checking
|
||||
hostKeyCallback func(hostname string, remote net.Addr, key PublicKey) error
|
||||
hostKeyCallback HostKeyCallback
|
||||
dialAddress string
|
||||
remoteAddr net.Addr
|
||||
|
||||
readSinceKex uint64
|
||||
// bannerCallback is non-empty if we are the client and it has been set in
|
||||
// ClientConfig. In that case it is called during the user authentication
|
||||
// dance to handle a custom server's message.
|
||||
bannerCallback BannerCallback
|
||||
|
||||
// Protects the writing side of the connection
|
||||
mu sync.Mutex
|
||||
cond *sync.Cond
|
||||
sentInitPacket []byte
|
||||
sentInitMsg *kexInitMsg
|
||||
writtenSinceKex uint64
|
||||
writeError error
|
||||
// Algorithms agreed in the last key exchange.
|
||||
algorithms *algorithms
|
||||
|
||||
readPacketsLeft uint32
|
||||
readBytesLeft int64
|
||||
|
||||
writePacketsLeft uint32
|
||||
writeBytesLeft int64
|
||||
|
||||
// The session ID or nil if first kex did not complete yet.
|
||||
sessionID []byte
|
||||
}
|
||||
|
||||
type pendingKex struct {
|
||||
otherInit []byte
|
||||
done chan error
|
||||
}
|
||||
|
||||
func newHandshakeTransport(conn keyingTransport, config *Config, clientVersion, serverVersion []byte) *handshakeTransport {
|
||||
t := &handshakeTransport{
|
||||
conn: conn,
|
||||
serverVersion: serverVersion,
|
||||
clientVersion: clientVersion,
|
||||
incoming: make(chan []byte, 16),
|
||||
config: config,
|
||||
incoming: make(chan []byte, chanSize),
|
||||
requestKex: make(chan struct{}, 1),
|
||||
startKex: make(chan *pendingKex, 1),
|
||||
|
||||
config: config,
|
||||
}
|
||||
t.cond = sync.NewCond(&t.mu)
|
||||
t.resetReadThresholds()
|
||||
t.resetWriteThresholds()
|
||||
|
||||
// We always start with a mandatory key exchange.
|
||||
t.requestKex <- struct{}{}
|
||||
return t
|
||||
}
|
||||
|
||||
|
@ -89,12 +125,14 @@ func newClientTransport(conn keyingTransport, clientVersion, serverVersion []byt
|
|||
t.dialAddress = dialAddr
|
||||
t.remoteAddr = addr
|
||||
t.hostKeyCallback = config.HostKeyCallback
|
||||
t.bannerCallback = config.BannerCallback
|
||||
if config.HostKeyAlgorithms != nil {
|
||||
t.hostKeyAlgorithms = config.HostKeyAlgorithms
|
||||
} else {
|
||||
t.hostKeyAlgorithms = supportedHostKeyAlgos
|
||||
}
|
||||
go t.readLoop()
|
||||
go t.kexLoop()
|
||||
return t
|
||||
}
|
||||
|
||||
|
@ -102,6 +140,7 @@ func newServerTransport(conn keyingTransport, clientVersion, serverVersion []byt
|
|||
t := newHandshakeTransport(conn, &config.Config, clientVersion, serverVersion)
|
||||
t.hostKeys = config.hostKeys
|
||||
go t.readLoop()
|
||||
go t.kexLoop()
|
||||
return t
|
||||
}
|
||||
|
||||
|
@ -109,6 +148,20 @@ func (t *handshakeTransport) getSessionID() []byte {
|
|||
return t.sessionID
|
||||
}
|
||||
|
||||
// waitSession waits for the session to be established. This should be
|
||||
// the first thing to call after instantiating handshakeTransport.
|
||||
func (t *handshakeTransport) waitSession() error {
|
||||
p, err := t.readPacket()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if p[0] != msgNewKeys {
|
||||
return fmt.Errorf("ssh: first packet should be msgNewKeys")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *handshakeTransport) id() string {
|
||||
if len(t.hostKeys) > 0 {
|
||||
return "server"
|
||||
|
@ -116,6 +169,20 @@ func (t *handshakeTransport) id() string {
|
|||
return "client"
|
||||
}
|
||||
|
||||
func (t *handshakeTransport) printPacket(p []byte, write bool) {
|
||||
action := "got"
|
||||
if write {
|
||||
action = "sent"
|
||||
}
|
||||
|
||||
if p[0] == msgChannelData || p[0] == msgChannelExtendedData {
|
||||
log.Printf("%s %s data (packet %d bytes)", t.id(), action, len(p))
|
||||
} else {
|
||||
msg, err := decode(p)
|
||||
log.Printf("%s %s %T %v (%v)", t.id(), action, msg, msg, err)
|
||||
}
|
||||
}
|
||||
|
||||
func (t *handshakeTransport) readPacket() ([]byte, error) {
|
||||
p, ok := <-t.incoming
|
||||
if !ok {
|
||||
|
@ -125,8 +192,10 @@ func (t *handshakeTransport) readPacket() ([]byte, error) {
|
|||
}
|
||||
|
||||
func (t *handshakeTransport) readLoop() {
|
||||
first := true
|
||||
for {
|
||||
p, err := t.readOnePacket()
|
||||
p, err := t.readOnePacket(first)
|
||||
first = false
|
||||
if err != nil {
|
||||
t.readError = err
|
||||
close(t.incoming)
|
||||
|
@ -138,67 +207,217 @@ func (t *handshakeTransport) readLoop() {
|
|||
t.incoming <- p
|
||||
}
|
||||
|
||||
// If we can't read, declare the writing part dead too.
|
||||
t.mu.Lock()
|
||||
defer t.mu.Unlock()
|
||||
if t.writeError == nil {
|
||||
t.writeError = t.readError
|
||||
}
|
||||
t.cond.Broadcast()
|
||||
// Stop writers too.
|
||||
t.recordWriteError(t.readError)
|
||||
|
||||
// Unblock the writer should it wait for this.
|
||||
close(t.startKex)
|
||||
|
||||
// Don't close t.requestKex; it's also written to from writePacket.
|
||||
}
|
||||
|
||||
func (t *handshakeTransport) readOnePacket() ([]byte, error) {
|
||||
if t.readSinceKex > t.config.RekeyThreshold {
|
||||
if err := t.requestKeyChange(); err != nil {
|
||||
return nil, err
|
||||
func (t *handshakeTransport) pushPacket(p []byte) error {
|
||||
if debugHandshake {
|
||||
t.printPacket(p, true)
|
||||
}
|
||||
return t.conn.writePacket(p)
|
||||
}
|
||||
|
||||
func (t *handshakeTransport) getWriteError() error {
|
||||
t.mu.Lock()
|
||||
defer t.mu.Unlock()
|
||||
return t.writeError
|
||||
}
|
||||
|
||||
func (t *handshakeTransport) recordWriteError(err error) {
|
||||
t.mu.Lock()
|
||||
defer t.mu.Unlock()
|
||||
if t.writeError == nil && err != nil {
|
||||
t.writeError = err
|
||||
}
|
||||
}
|
||||
|
||||
func (t *handshakeTransport) requestKeyExchange() {
|
||||
select {
|
||||
case t.requestKex <- struct{}{}:
|
||||
default:
|
||||
// something already requested a kex, so do nothing.
|
||||
}
|
||||
}
|
||||
|
||||
func (t *handshakeTransport) resetWriteThresholds() {
|
||||
t.writePacketsLeft = packetRekeyThreshold
|
||||
if t.config.RekeyThreshold > 0 {
|
||||
t.writeBytesLeft = int64(t.config.RekeyThreshold)
|
||||
} else if t.algorithms != nil {
|
||||
t.writeBytesLeft = t.algorithms.w.rekeyBytes()
|
||||
} else {
|
||||
t.writeBytesLeft = 1 << 30
|
||||
}
|
||||
}
|
||||
|
||||
func (t *handshakeTransport) kexLoop() {
|
||||
|
||||
write:
|
||||
for t.getWriteError() == nil {
|
||||
var request *pendingKex
|
||||
var sent bool
|
||||
|
||||
for request == nil || !sent {
|
||||
var ok bool
|
||||
select {
|
||||
case request, ok = <-t.startKex:
|
||||
if !ok {
|
||||
break write
|
||||
}
|
||||
case <-t.requestKex:
|
||||
break
|
||||
}
|
||||
|
||||
if !sent {
|
||||
if err := t.sendKexInit(); err != nil {
|
||||
t.recordWriteError(err)
|
||||
break
|
||||
}
|
||||
sent = true
|
||||
}
|
||||
}
|
||||
|
||||
if err := t.getWriteError(); err != nil {
|
||||
if request != nil {
|
||||
request.done <- err
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
// We're not servicing t.requestKex, but that is OK:
|
||||
// we never block on sending to t.requestKex.
|
||||
|
||||
// We're not servicing t.startKex, but the remote end
|
||||
// has just sent us a kexInitMsg, so it can't send
|
||||
// another key change request, until we close the done
|
||||
// channel on the pendingKex request.
|
||||
|
||||
err := t.enterKeyExchange(request.otherInit)
|
||||
|
||||
t.mu.Lock()
|
||||
t.writeError = err
|
||||
t.sentInitPacket = nil
|
||||
t.sentInitMsg = nil
|
||||
|
||||
t.resetWriteThresholds()
|
||||
|
||||
// we have completed the key exchange. Since the
|
||||
// reader is still blocked, it is safe to clear out
|
||||
// the requestKex channel. This avoids the situation
|
||||
// where: 1) we consumed our own request for the
|
||||
// initial kex, and 2) the kex from the remote side
|
||||
// caused another send on the requestKex channel.
|
||||
clear:
|
||||
for {
|
||||
select {
|
||||
case <-t.requestKex:
|
||||
//
|
||||
default:
|
||||
break clear
|
||||
}
|
||||
}
|
||||
|
||||
request.done <- t.writeError
|
||||
|
||||
// kex finished. Push packets that we received while
|
||||
// the kex was in progress. Don't look at t.startKex
|
||||
// and don't increment writtenSinceKex: if we trigger
|
||||
// another kex while we are still busy with the last
|
||||
// one, things will become very confusing.
|
||||
for _, p := range t.pendingPackets {
|
||||
t.writeError = t.pushPacket(p)
|
||||
if t.writeError != nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
t.pendingPackets = t.pendingPackets[:0]
|
||||
t.mu.Unlock()
|
||||
}
|
||||
|
||||
// drain startKex channel. We don't service t.requestKex
|
||||
// because nobody does blocking sends there.
|
||||
go func() {
|
||||
for init := range t.startKex {
|
||||
init.done <- t.writeError
|
||||
}
|
||||
}()
|
||||
|
||||
// Unblock reader.
|
||||
t.conn.Close()
|
||||
}
|
||||
|
||||
// The protocol uses uint32 for packet counters, so we can't let them
|
||||
// reach 1<<32. We will actually read and write more packets than
|
||||
// this, though: the other side may send more packets, and after we
|
||||
// hit this limit on writing we will send a few more packets for the
|
||||
// key exchange itself.
|
||||
const packetRekeyThreshold = (1 << 31)
|
||||
|
||||
func (t *handshakeTransport) resetReadThresholds() {
|
||||
t.readPacketsLeft = packetRekeyThreshold
|
||||
if t.config.RekeyThreshold > 0 {
|
||||
t.readBytesLeft = int64(t.config.RekeyThreshold)
|
||||
} else if t.algorithms != nil {
|
||||
t.readBytesLeft = t.algorithms.r.rekeyBytes()
|
||||
} else {
|
||||
t.readBytesLeft = 1 << 30
|
||||
}
|
||||
}
|
||||
|
||||
func (t *handshakeTransport) readOnePacket(first bool) ([]byte, error) {
|
||||
p, err := t.conn.readPacket()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
t.readSinceKex += uint64(len(p))
|
||||
if debugHandshake {
|
||||
if p[0] == msgChannelData || p[0] == msgChannelExtendedData {
|
||||
log.Printf("%s got data (packet %d bytes)", t.id(), len(p))
|
||||
} else {
|
||||
msg, err := decode(p)
|
||||
log.Printf("%s got %T %v (%v)", t.id(), msg, msg, err)
|
||||
}
|
||||
if t.readPacketsLeft > 0 {
|
||||
t.readPacketsLeft--
|
||||
} else {
|
||||
t.requestKeyExchange()
|
||||
}
|
||||
|
||||
if t.readBytesLeft > 0 {
|
||||
t.readBytesLeft -= int64(len(p))
|
||||
} else {
|
||||
t.requestKeyExchange()
|
||||
}
|
||||
|
||||
if debugHandshake {
|
||||
t.printPacket(p, false)
|
||||
}
|
||||
|
||||
if first && p[0] != msgKexInit {
|
||||
return nil, fmt.Errorf("ssh: first packet should be msgKexInit")
|
||||
}
|
||||
|
||||
if p[0] != msgKexInit {
|
||||
return p, nil
|
||||
}
|
||||
|
||||
t.mu.Lock()
|
||||
|
||||
firstKex := t.sessionID == nil
|
||||
|
||||
err = t.enterKeyExchangeLocked(p)
|
||||
if err != nil {
|
||||
// drop connection
|
||||
t.conn.Close()
|
||||
t.writeError = err
|
||||
kex := pendingKex{
|
||||
done: make(chan error, 1),
|
||||
otherInit: p,
|
||||
}
|
||||
t.startKex <- &kex
|
||||
err = <-kex.done
|
||||
|
||||
if debugHandshake {
|
||||
log.Printf("%s exited key exchange (first %v), err %v", t.id(), firstKex, err)
|
||||
}
|
||||
|
||||
// Unblock writers.
|
||||
t.sentInitMsg = nil
|
||||
t.sentInitPacket = nil
|
||||
t.cond.Broadcast()
|
||||
t.writtenSinceKex = 0
|
||||
t.mu.Unlock()
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
t.readSinceKex = 0
|
||||
t.resetReadThresholds()
|
||||
|
||||
// By default, a key exchange is hidden from higher layers by
|
||||
// translating it into msgIgnore.
|
||||
|
@ -213,61 +432,16 @@ func (t *handshakeTransport) readOnePacket() ([]byte, error) {
|
|||
return successPacket, nil
|
||||
}
|
||||
|
||||
// keyChangeCategory describes whether a key exchange is the first on a
|
||||
// connection, or a subsequent one.
|
||||
type keyChangeCategory bool
|
||||
|
||||
const (
|
||||
firstKeyExchange keyChangeCategory = true
|
||||
subsequentKeyExchange keyChangeCategory = false
|
||||
)
|
||||
|
||||
// sendKexInit sends a key change message, and returns the message
|
||||
// that was sent. After initiating the key change, all writes will be
|
||||
// blocked until the change is done, and a failed key change will
|
||||
// close the underlying transport. This function is safe for
|
||||
// concurrent use by multiple goroutines.
|
||||
func (t *handshakeTransport) sendKexInit(isFirst keyChangeCategory) error {
|
||||
var err error
|
||||
|
||||
// sendKexInit sends a key change message.
|
||||
func (t *handshakeTransport) sendKexInit() error {
|
||||
t.mu.Lock()
|
||||
// If this is the initial key change, but we already have a sessionID,
|
||||
// then do nothing because the key exchange has already completed
|
||||
// asynchronously.
|
||||
if !isFirst || t.sessionID == nil {
|
||||
_, _, err = t.sendKexInitLocked(isFirst)
|
||||
}
|
||||
t.mu.Unlock()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if isFirst {
|
||||
if packet, err := t.readPacket(); err != nil {
|
||||
return err
|
||||
} else if packet[0] != msgNewKeys {
|
||||
return unexpectedMessageError(msgNewKeys, packet[0])
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *handshakeTransport) requestInitialKeyChange() error {
|
||||
return t.sendKexInit(firstKeyExchange)
|
||||
}
|
||||
|
||||
func (t *handshakeTransport) requestKeyChange() error {
|
||||
return t.sendKexInit(subsequentKeyExchange)
|
||||
}
|
||||
|
||||
// sendKexInitLocked sends a key change message. t.mu must be locked
|
||||
// while this happens.
|
||||
func (t *handshakeTransport) sendKexInitLocked(isFirst keyChangeCategory) (*kexInitMsg, []byte, error) {
|
||||
// kexInits may be sent either in response to the other side,
|
||||
// or because our side wants to initiate a key change, so we
|
||||
// may have already sent a kexInit. In that case, don't send a
|
||||
// second kexInit.
|
||||
defer t.mu.Unlock()
|
||||
if t.sentInitMsg != nil {
|
||||
return t.sentInitMsg, t.sentInitPacket, nil
|
||||
// kexInits may be sent either in response to the other side,
|
||||
// or because our side wants to initiate a key change, so we
|
||||
// may have already sent a kexInit. In that case, don't send a
|
||||
// second kexInit.
|
||||
return nil
|
||||
}
|
||||
|
||||
msg := &kexInitMsg{
|
||||
|
@ -295,53 +469,65 @@ func (t *handshakeTransport) sendKexInitLocked(isFirst keyChangeCategory) (*kexI
|
|||
packetCopy := make([]byte, len(packet))
|
||||
copy(packetCopy, packet)
|
||||
|
||||
if err := t.conn.writePacket(packetCopy); err != nil {
|
||||
return nil, nil, err
|
||||
if err := t.pushPacket(packetCopy); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
t.sentInitMsg = msg
|
||||
t.sentInitPacket = packet
|
||||
return msg, packet, nil
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *handshakeTransport) writePacket(p []byte) error {
|
||||
t.mu.Lock()
|
||||
defer t.mu.Unlock()
|
||||
|
||||
if t.writtenSinceKex > t.config.RekeyThreshold {
|
||||
t.sendKexInitLocked(subsequentKeyExchange)
|
||||
}
|
||||
for t.sentInitMsg != nil && t.writeError == nil {
|
||||
t.cond.Wait()
|
||||
}
|
||||
if t.writeError != nil {
|
||||
return t.writeError
|
||||
}
|
||||
t.writtenSinceKex += uint64(len(p))
|
||||
|
||||
switch p[0] {
|
||||
case msgKexInit:
|
||||
return errors.New("ssh: only handshakeTransport can send kexInit")
|
||||
case msgNewKeys:
|
||||
return errors.New("ssh: only handshakeTransport can send newKeys")
|
||||
default:
|
||||
return t.conn.writePacket(p)
|
||||
}
|
||||
|
||||
t.mu.Lock()
|
||||
defer t.mu.Unlock()
|
||||
if t.writeError != nil {
|
||||
return t.writeError
|
||||
}
|
||||
|
||||
if t.sentInitMsg != nil {
|
||||
// Copy the packet so the writer can reuse the buffer.
|
||||
cp := make([]byte, len(p))
|
||||
copy(cp, p)
|
||||
t.pendingPackets = append(t.pendingPackets, cp)
|
||||
return nil
|
||||
}
|
||||
|
||||
if t.writeBytesLeft > 0 {
|
||||
t.writeBytesLeft -= int64(len(p))
|
||||
} else {
|
||||
t.requestKeyExchange()
|
||||
}
|
||||
|
||||
if t.writePacketsLeft > 0 {
|
||||
t.writePacketsLeft--
|
||||
} else {
|
||||
t.requestKeyExchange()
|
||||
}
|
||||
|
||||
if err := t.pushPacket(p); err != nil {
|
||||
t.writeError = err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *handshakeTransport) Close() error {
|
||||
return t.conn.Close()
|
||||
}
|
||||
|
||||
// enterKeyExchange runs the key exchange. t.mu must be held while running this.
|
||||
func (t *handshakeTransport) enterKeyExchangeLocked(otherInitPacket []byte) error {
|
||||
func (t *handshakeTransport) enterKeyExchange(otherInitPacket []byte) error {
|
||||
if debugHandshake {
|
||||
log.Printf("%s entered key exchange", t.id())
|
||||
}
|
||||
myInit, myInitPacket, err := t.sendKexInitLocked(subsequentKeyExchange)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
otherInit := &kexInitMsg{}
|
||||
if err := Unmarshal(otherInitPacket, otherInit); err != nil {
|
||||
|
@ -352,20 +538,20 @@ func (t *handshakeTransport) enterKeyExchangeLocked(otherInitPacket []byte) erro
|
|||
clientVersion: t.clientVersion,
|
||||
serverVersion: t.serverVersion,
|
||||
clientKexInit: otherInitPacket,
|
||||
serverKexInit: myInitPacket,
|
||||
serverKexInit: t.sentInitPacket,
|
||||
}
|
||||
|
||||
clientInit := otherInit
|
||||
serverInit := myInit
|
||||
serverInit := t.sentInitMsg
|
||||
if len(t.hostKeys) == 0 {
|
||||
clientInit = myInit
|
||||
serverInit = otherInit
|
||||
clientInit, serverInit = serverInit, clientInit
|
||||
|
||||
magics.clientKexInit = myInitPacket
|
||||
magics.clientKexInit = t.sentInitPacket
|
||||
magics.serverKexInit = otherInitPacket
|
||||
}
|
||||
|
||||
algs, err := findAgreedAlgorithms(clientInit, serverInit)
|
||||
var err error
|
||||
t.algorithms, err = findAgreedAlgorithms(clientInit, serverInit)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -388,16 +574,16 @@ func (t *handshakeTransport) enterKeyExchangeLocked(otherInitPacket []byte) erro
|
|||
}
|
||||
}
|
||||
|
||||
kex, ok := kexAlgoMap[algs.kex]
|
||||
kex, ok := kexAlgoMap[t.algorithms.kex]
|
||||
if !ok {
|
||||
return fmt.Errorf("ssh: unexpected key exchange algorithm %v", algs.kex)
|
||||
return fmt.Errorf("ssh: unexpected key exchange algorithm %v", t.algorithms.kex)
|
||||
}
|
||||
|
||||
var result *kexResult
|
||||
if len(t.hostKeys) > 0 {
|
||||
result, err = t.server(kex, algs, &magics)
|
||||
result, err = t.server(kex, t.algorithms, &magics)
|
||||
} else {
|
||||
result, err = t.client(kex, algs, &magics)
|
||||
result, err = t.client(kex, t.algorithms, &magics)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
|
@ -409,7 +595,9 @@ func (t *handshakeTransport) enterKeyExchangeLocked(otherInitPacket []byte) erro
|
|||
}
|
||||
result.SessionID = t.sessionID
|
||||
|
||||
t.conn.prepareKeyChange(algs, result)
|
||||
if err := t.conn.prepareKeyChange(t.algorithms, result); err != nil {
|
||||
return err
|
||||
}
|
||||
if err = t.conn.writePacket([]byte{msgNewKeys}); err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -449,11 +637,9 @@ func (t *handshakeTransport) client(kex kexAlgorithm, algs *algorithms, magics *
|
|||
return nil, err
|
||||
}
|
||||
|
||||
if t.hostKeyCallback != nil {
|
||||
err = t.hostKeyCallback(t.dialAddress, t.remoteAddr, hostKey)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
err = t.hostKeyCallback(t.dialAddress, t.remoteAddr, hostKey)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return result, nil
|
||||
|
|
|
@ -383,8 +383,8 @@ func init() {
|
|||
// 4253 and Oakley Group 2 in RFC 2409.
|
||||
p, _ := new(big.Int).SetString("FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE65381FFFFFFFFFFFFFFFF", 16)
|
||||
kexAlgoMap[kexAlgoDH1SHA1] = &dhGroup{
|
||||
g: new(big.Int).SetInt64(2),
|
||||
p: p,
|
||||
g: new(big.Int).SetInt64(2),
|
||||
p: p,
|
||||
pMinus1: new(big.Int).Sub(p, bigOne),
|
||||
}
|
||||
|
||||
|
@ -393,8 +393,8 @@ func init() {
|
|||
p, _ = new(big.Int).SetString("FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFFFFFFFFFF", 16)
|
||||
|
||||
kexAlgoMap[kexAlgoDH14SHA1] = &dhGroup{
|
||||
g: new(big.Int).SetInt64(2),
|
||||
p: p,
|
||||
g: new(big.Int).SetInt64(2),
|
||||
p: p,
|
||||
pMinus1: new(big.Int).Sub(p, bigOne),
|
||||
}
|
||||
|
||||
|
|
|
@ -10,10 +10,13 @@ import (
|
|||
"crypto/dsa"
|
||||
"crypto/ecdsa"
|
||||
"crypto/elliptic"
|
||||
"crypto/md5"
|
||||
"crypto/rsa"
|
||||
"crypto/sha256"
|
||||
"crypto/x509"
|
||||
"encoding/asn1"
|
||||
"encoding/base64"
|
||||
"encoding/hex"
|
||||
"encoding/pem"
|
||||
"errors"
|
||||
"fmt"
|
||||
|
@ -364,6 +367,17 @@ func (r *dsaPublicKey) Type() string {
|
|||
return "ssh-dss"
|
||||
}
|
||||
|
||||
func checkDSAParams(param *dsa.Parameters) error {
|
||||
// SSH specifies FIPS 186-2, which only provided a single size
|
||||
// (1024 bits) DSA key. FIPS 186-3 allows for larger key
|
||||
// sizes, which would confuse SSH.
|
||||
if l := param.P.BitLen(); l != 1024 {
|
||||
return fmt.Errorf("ssh: unsupported DSA key size %d", l)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// parseDSA parses an DSA key according to RFC 4253, section 6.6.
|
||||
func parseDSA(in []byte) (out PublicKey, rest []byte, err error) {
|
||||
var w struct {
|
||||
|
@ -374,13 +388,18 @@ func parseDSA(in []byte) (out PublicKey, rest []byte, err error) {
|
|||
return nil, nil, err
|
||||
}
|
||||
|
||||
param := dsa.Parameters{
|
||||
P: w.P,
|
||||
Q: w.Q,
|
||||
G: w.G,
|
||||
}
|
||||
if err := checkDSAParams(¶m); err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
key := &dsaPublicKey{
|
||||
Parameters: dsa.Parameters{
|
||||
P: w.P,
|
||||
Q: w.Q,
|
||||
G: w.G,
|
||||
},
|
||||
Y: w.Y,
|
||||
Parameters: param,
|
||||
Y: w.Y,
|
||||
}
|
||||
return key, w.Rest, nil
|
||||
}
|
||||
|
@ -627,19 +646,28 @@ func (k *ecdsaPublicKey) CryptoPublicKey() crypto.PublicKey {
|
|||
}
|
||||
|
||||
// NewSignerFromKey takes an *rsa.PrivateKey, *dsa.PrivateKey,
|
||||
// *ecdsa.PrivateKey or any other crypto.Signer and returns a corresponding
|
||||
// Signer instance. ECDSA keys must use P-256, P-384 or P-521.
|
||||
// *ecdsa.PrivateKey or any other crypto.Signer and returns a
|
||||
// corresponding Signer instance. ECDSA keys must use P-256, P-384 or
|
||||
// P-521. DSA keys must use parameter size L1024N160.
|
||||
func NewSignerFromKey(key interface{}) (Signer, error) {
|
||||
switch key := key.(type) {
|
||||
case crypto.Signer:
|
||||
return NewSignerFromSigner(key)
|
||||
case *dsa.PrivateKey:
|
||||
return &dsaPrivateKey{key}, nil
|
||||
return newDSAPrivateKey(key)
|
||||
default:
|
||||
return nil, fmt.Errorf("ssh: unsupported key type %T", key)
|
||||
}
|
||||
}
|
||||
|
||||
func newDSAPrivateKey(key *dsa.PrivateKey) (Signer, error) {
|
||||
if err := checkDSAParams(&key.PublicKey.Parameters); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &dsaPrivateKey{key}, nil
|
||||
}
|
||||
|
||||
type wrappedSigner struct {
|
||||
signer crypto.Signer
|
||||
pubKey PublicKey
|
||||
|
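The DSA branch above now goes through newDSAPrivateKey, so only FIPS 186-2 sized (L1024N160) keys are accepted. A minimal sketch of hitting that path from application code against the vendored golang.org/x/crypto/ssh shown in this diff; the key generation is illustrative and not part of the change:

```go
package main

import (
    "crypto/dsa"
    "crypto/rand"
    "fmt"

    "golang.org/x/crypto/ssh"
)

func main() {
    // Only L1024N160 passes checkDSAParams; larger FIPS 186-3 sizes are rejected.
    var params dsa.Parameters
    if err := dsa.GenerateParameters(&params, rand.Reader, dsa.L1024N160); err != nil {
        panic(err)
    }
    key := &dsa.PrivateKey{PublicKey: dsa.PublicKey{Parameters: params}}
    if err := dsa.GenerateKey(key, rand.Reader); err != nil {
        panic(err)
    }

    // Takes the *dsa.PrivateKey case of NewSignerFromKey shown above.
    signer, err := ssh.NewSignerFromKey(key)
    if err != nil {
        panic(err)
    }
    fmt.Println(signer.PublicKey().Type()) // prints "ssh-dss"
}
```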
@@ -753,6 +781,18 @@ func ParsePrivateKey(pemBytes []byte) (Signer, error) {
    return NewSignerFromKey(key)
}

// ParsePrivateKeyWithPassphrase returns a Signer from a PEM encoded private
// key and passphrase. It supports the same keys as
// ParseRawPrivateKeyWithPassphrase.
func ParsePrivateKeyWithPassphrase(pemBytes, passPhrase []byte) (Signer, error) {
    key, err := ParseRawPrivateKeyWithPassphrase(pemBytes, passPhrase)
    if err != nil {
        return nil, err
    }

    return NewSignerFromKey(key)
}

// encryptedBlock tells whether a private key is
// encrypted by examining its Proc-Type header
// for a mention of ENCRYPTED
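ParsePrivateKeyWithPassphrase is just ParseRawPrivateKeyWithPassphrase plus NewSignerFromKey, so the result can be fed straight into an AuthMethod. A hedged usage sketch; the file name, passphrase and user below are placeholders, not values from this change:

```go
package main

import (
    "io/ioutil"
    "log"

    "golang.org/x/crypto/ssh"
)

func main() {
    // Assumed: an encrypted PEM private key on disk.
    pemBytes, err := ioutil.ReadFile("id_rsa")
    if err != nil {
        log.Fatal(err)
    }
    signer, err := ssh.ParsePrivateKeyWithPassphrase(pemBytes, []byte("my passphrase"))
    if err != nil {
        log.Fatal(err)
    }
    config := &ssh.ClientConfig{
        User:            "git",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
    }
    _ = config
}
```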
@@ -787,6 +827,43 @@ func ParseRawPrivateKey(pemBytes []byte) (interface{}, error) {
    }
}

// ParseRawPrivateKeyWithPassphrase returns a private key decrypted with
// passphrase from a PEM encoded private key. If wrong passphrase, return
// x509.IncorrectPasswordError.
func ParseRawPrivateKeyWithPassphrase(pemBytes, passPhrase []byte) (interface{}, error) {
    block, _ := pem.Decode(pemBytes)
    if block == nil {
        return nil, errors.New("ssh: no key found")
    }
    buf := block.Bytes

    if encryptedBlock(block) {
        if x509.IsEncryptedPEMBlock(block) {
            var err error
            buf, err = x509.DecryptPEMBlock(block, passPhrase)
            if err != nil {
                if err == x509.IncorrectPasswordError {
                    return nil, err
                }
                return nil, fmt.Errorf("ssh: cannot decode encrypted private keys: %v", err)
            }
        }
    }

    switch block.Type {
    case "RSA PRIVATE KEY":
        return x509.ParsePKCS1PrivateKey(buf)
    case "EC PRIVATE KEY":
        return x509.ParseECPrivateKey(buf)
    case "DSA PRIVATE KEY":
        return ParseDSAPrivateKey(buf)
    case "OPENSSH PRIVATE KEY":
        return parseOpenSSHPrivateKey(buf)
    default:
        return nil, fmt.Errorf("ssh: unsupported key type %q", block.Type)
    }
}

// ParseDSAPrivateKey returns a DSA private key from its ASN.1 DER encoding, as
// specified by the OpenSSL DSA man page.
func ParseDSAPrivateKey(der []byte) (*dsa.PrivateKey, error) {

@@ -795,8 +872,8 @@ func ParseDSAPrivateKey(der []byte) (*dsa.PrivateKey, error) {
    P    *big.Int
    Q    *big.Int
    G    *big.Int
    Priv *big.Int
    Pub  *big.Int
    Priv *big.Int
}
rest, err := asn1.Unmarshal(der, &k)
if err != nil {

@@ -813,15 +890,15 @@ func ParseDSAPrivateKey(der []byte) (*dsa.PrivateKey, error) {
            Q: k.Q,
            G: k.G,
        },
        Y: k.Priv,
        Y: k.Pub,
    },
    X: k.Pub,
    X: k.Priv,
}, nil
}

// Implemented based on the documentation at
// https://github.com/openssh/openssh-portable/blob/master/PROTOCOL.key
func parseOpenSSHPrivateKey(key []byte) (*ed25519.PrivateKey, error) {
func parseOpenSSHPrivateKey(key []byte) (crypto.PrivateKey, error) {
    magic := append([]byte("openssh-key-v1"), 0)
    if !bytes.Equal(magic, key[0:len(magic)]) {
        return nil, errors.New("ssh: invalid openssh private key format")

@@ -841,14 +918,15 @@ func parseOpenSSHPrivateKey(key []byte) (*ed25519.PrivateKey, error) {
        return nil, err
    }

    if w.KdfName != "none" || w.CipherName != "none" {
        return nil, errors.New("ssh: cannot decode encrypted private keys")
    }

    pk1 := struct {
        Check1  uint32
        Check2  uint32
        Keytype string
        Pub     []byte
        Priv    []byte
        Comment string
        Pad     []byte `ssh:"rest"`
        Rest    []byte `ssh:"rest"`
    }{}

    if err := Unmarshal(w.PrivKeyBlock, &pk1); err != nil {

@@ -859,22 +937,95 @@ func parseOpenSSHPrivateKey(key []byte) (*ed25519.PrivateKey, error) {
        return nil, errors.New("ssh: checkint mismatch")
    }

    // we only handle ed25519 keys currently
    if pk1.Keytype != KeyAlgoED25519 {
    // we only handle ed25519 and rsa keys currently
    switch pk1.Keytype {
    case KeyAlgoRSA:
        // https://github.com/openssh/openssh-portable/blob/master/sshkey.c#L2760-L2773
        key := struct {
            N       *big.Int
            E       *big.Int
            D       *big.Int
            Iqmp    *big.Int
            P       *big.Int
            Q       *big.Int
            Comment string
            Pad     []byte `ssh:"rest"`
        }{}

        if err := Unmarshal(pk1.Rest, &key); err != nil {
            return nil, err
        }

        for i, b := range key.Pad {
            if int(b) != i+1 {
                return nil, errors.New("ssh: padding not as expected")
            }
        }

        pk := &rsa.PrivateKey{
            PublicKey: rsa.PublicKey{
                N: key.N,
                E: int(key.E.Int64()),
            },
            D:      key.D,
            Primes: []*big.Int{key.P, key.Q},
        }

        if err := pk.Validate(); err != nil {
            return nil, err
        }

        pk.Precompute()

        return pk, nil
    case KeyAlgoED25519:
        key := struct {
            Pub     []byte
            Priv    []byte
            Comment string
            Pad     []byte `ssh:"rest"`
        }{}

        if err := Unmarshal(pk1.Rest, &key); err != nil {
            return nil, err
        }

        if len(key.Priv) != ed25519.PrivateKeySize {
            return nil, errors.New("ssh: private key unexpected length")
        }

        for i, b := range key.Pad {
            if int(b) != i+1 {
                return nil, errors.New("ssh: padding not as expected")
            }
        }

        pk := ed25519.PrivateKey(make([]byte, ed25519.PrivateKeySize))
        copy(pk, key.Priv)
        return &pk, nil
    default:
        return nil, errors.New("ssh: unhandled key type")
    }

    for i, b := range pk1.Pad {
        if int(b) != i+1 {
            return nil, errors.New("ssh: padding not as expected")
        }
    }

    if len(pk1.Priv) != ed25519.PrivateKeySize {
        return nil, errors.New("ssh: private key unexpected length")
    }

    pk := ed25519.PrivateKey(make([]byte, ed25519.PrivateKeySize))
    copy(pk, pk1.Priv)
    return &pk, nil
}

// FingerprintLegacyMD5 returns the user presentation of the key's
// fingerprint as described by RFC 4716 section 4.
func FingerprintLegacyMD5(pubKey PublicKey) string {
    md5sum := md5.Sum(pubKey.Marshal())
    hexarray := make([]string, len(md5sum))
    for i, c := range md5sum {
        hexarray[i] = hex.EncodeToString([]byte{c})
    }
    return strings.Join(hexarray, ":")
}

// FingerprintSHA256 returns the user presentation of the key's
// fingerprint as unpadded base64 encoded sha256 hash.
// This format was introduced from OpenSSH 6.8.
// https://www.openssh.com/txt/release-6.8
// https://tools.ietf.org/html/rfc4648#section-3.2 (unpadded base64 encoding)
func FingerprintSHA256(pubKey PublicKey) string {
    sha256sum := sha256.Sum256(pubKey.Marshal())
    hash := base64.RawStdEncoding.EncodeToString(sha256sum[:])
    return "SHA256:" + hash
}
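Both fingerprint helpers only need a PublicKey, so they slot naturally into a HostKeyCallback for logging what the server presented. A sketch (assumed code, not part of the diff):

```go
package main

import (
    "fmt"
    "net"

    "golang.org/x/crypto/ssh"
)

// logHostKey prints both fingerprint formats and accepts the key.
// Accepting unconditionally is only acceptable for a demo.
func logHostKey(hostname string, remote net.Addr, key ssh.PublicKey) error {
    fmt.Println(hostname, ssh.FingerprintSHA256(key))    // "SHA256:..." unpadded base64
    fmt.Println(hostname, ssh.FingerprintLegacyMD5(key)) // colon-separated hex
    return nil
}

func main() {
    config := &ssh.ClientConfig{
        User:            "demo",
        HostKeyCallback: logHostKey,
    }
    _ = config
}
```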
@@ -15,6 +15,7 @@ import (

type macMode struct {
    keySize int
    etm     bool
    new     func(key []byte) hash.Hash
}

@@ -45,13 +46,16 @@ func (t truncatingMAC) Size() int {
func (t truncatingMAC) BlockSize() int { return t.hmac.BlockSize() }

var macModes = map[string]*macMode{
    "hmac-sha2-256": {32, func(key []byte) hash.Hash {
    "hmac-sha2-256-etm@openssh.com": {32, true, func(key []byte) hash.Hash {
        return hmac.New(sha256.New, key)
    }},
    "hmac-sha1": {20, func(key []byte) hash.Hash {
    "hmac-sha2-256": {32, false, func(key []byte) hash.Hash {
        return hmac.New(sha256.New, key)
    }},
    "hmac-sha1": {20, false, func(key []byte) hash.Hash {
        return hmac.New(sha1.New, key)
    }},
    "hmac-sha1-96": {20, func(key []byte) hash.Hash {
    "hmac-sha1-96": {20, false, func(key []byte) hash.Hash {
        return truncatingMAC{12, hmac.New(sha1.New, key)}
    }},
}
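With the etm flag wired through, hmac-sha2-256-etm@openssh.com becomes a selectable MAC. A sketch of pinning it explicitly from a client config; whether it is preferred by default depends on the package's internal preference list, which is not shown in this diff:

```go
package main

import "golang.org/x/crypto/ssh"

func main() {
    config := &ssh.ClientConfig{
        User: "demo",
        Config: ssh.Config{
            // Prefer encrypt-then-MAC, fall back to the plain variants.
            MACs: []string{"hmac-sha2-256-etm@openssh.com", "hmac-sha2-256", "hmac-sha1"},
        },
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
    }
    _ = config
}
```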
@@ -23,10 +23,6 @@ const (
    msgUnimplemented = 3
    msgDebug         = 4
    msgNewKeys       = 21

    // Standard authentication messages
    msgUserAuthSuccess = 52
    msgUserAuthBanner  = 53
)

// SSH messages:

@@ -137,6 +133,18 @@ type userAuthFailureMsg struct {
    PartialSuccess bool
}

// See RFC 4252, section 5.1
const msgUserAuthSuccess = 52

// See RFC 4252, section 5.4
const msgUserAuthBanner = 53

type userAuthBannerMsg struct {
    Message string `sshtype:"53"`
    // unused, but required to allow message parsing
    Language string
}

// See RFC 4256, section 3.2
const msgUserAuthInfoRequest = 60
const msgUserAuthInfoResponse = 61

@@ -116,9 +116,9 @@ func (m *mux) Wait() error {
func newMux(p packetConn) *mux {
    m := &mux{
        conn:             p,
        incomingChannels: make(chan NewChannel, 16),
        incomingChannels: make(chan NewChannel, chanSize),
        globalResponses:  make(chan interface{}, 1),
        incomingRequests: make(chan *Request, 16),
        incomingRequests: make(chan *Request, chanSize),
        errCond:          newCond(),
    }
    if debugMux {

@@ -10,26 +10,38 @@ import (
    "fmt"
    "io"
    "net"
    "strings"
)

// The Permissions type holds fine-grained permissions that are
// specific to a user or a specific authentication method for a
// user. Permissions, except for "source-address", must be enforced in
// the server application layer, after successful authentication. The
// Permissions are passed on in ServerConn so a server implementation
// can honor them.
// specific to a user or a specific authentication method for a user.
// The Permissions value for a successful authentication attempt is
// available in ServerConn, so it can be used to pass information from
// the user-authentication phase to the application layer.
type Permissions struct {
    // Critical options restrict default permissions. Common
    // restrictions are "source-address" and "force-command". If
    // the server cannot enforce the restriction, or does not
    // recognize it, the user should not authenticate.
    // CriticalOptions indicate restrictions to the default
    // permissions, and are typically used in conjunction with
    // user certificates. The standard for SSH certificates
    // defines "force-command" (only allow the given command to
    // execute) and "source-address" (only allow connections from
    // the given address). The SSH package currently only enforces
    // the "source-address" critical option. It is up to server
    // implementations to enforce other critical options, such as
    // "force-command", by checking them after the SSH handshake
    // is successful. In general, SSH servers should reject
    // connections that specify critical options that are unknown
    // or not supported.
    CriticalOptions map[string]string

    // Extensions are extra functionality that the server may
    // offer on authenticated connections. Common extensions are
    // "permit-agent-forwarding", "permit-X11-forwarding". Lack of
    // support for an extension does not preclude authenticating a
    // user.
    // offer on authenticated connections. Lack of support for an
    // extension does not preclude authenticating a user. Common
    // extensions are "permit-agent-forwarding",
    // "permit-X11-forwarding". The Go SSH library currently does
    // not act on any extension, and it is up to server
    // implementations to honor them. Extensions can be used to
    // pass data from the authentication callbacks to the server
    // application layer.
    Extensions map[string]string
}

@@ -44,13 +56,24 @@ type ServerConfig struct {
    // authenticating.
    NoClientAuth bool

    // MaxAuthTries specifies the maximum number of authentication attempts
    // permitted per connection. If set to a negative number, the number of
    // attempts are unlimited. If set to zero, the number of attempts are limited
    // to 6.
    MaxAuthTries int

    // PasswordCallback, if non-nil, is called when a user
    // attempts to authenticate using a password.
    PasswordCallback func(conn ConnMetadata, password []byte) (*Permissions, error)

    // PublicKeyCallback, if non-nil, is called when a client attempts public
    // key authentication. It must return true if the given public key is
    // valid for the given user. For example, see CertChecker.Authenticate.
    // PublicKeyCallback, if non-nil, is called when a client
    // offers a public key for authentication. It must return a nil error
    // if the given public key can be used to authenticate the
    // given user. For example, see CertChecker.Authenticate. A
    // call to this function does not guarantee that the key
    // offered is in fact used to authenticate. To record any data
    // depending on the public key, store it inside a
    // Permissions.Extensions entry.
    PublicKeyCallback func(conn ConnMetadata, key PublicKey) (*Permissions, error)

    // KeyboardInteractiveCallback, if non-nil, is called when
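The reworded PublicKeyCallback doc says a nil error does not mean the key was actually used, and that per-key data belongs in Permissions.Extensions. A hedged server-side sketch; authorizedKeys and its contents are assumptions, not part of this change:

```go
package main

import (
    "fmt"

    "golang.org/x/crypto/ssh"
)

// authorizedKeys maps ssh.PublicKey.Marshal() output to "allowed"; how it is
// populated is outside the scope of this diff.
var authorizedKeys = map[string]bool{}

func newServerConfig() *ssh.ServerConfig {
    return &ssh.ServerConfig{
        MaxAuthTries: 3, // 0 keeps the default of 6 described above
        PublicKeyCallback: func(conn ssh.ConnMetadata, key ssh.PublicKey) (*ssh.Permissions, error) {
            if !authorizedKeys[string(key.Marshal())] {
                return nil, fmt.Errorf("unknown public key for %q", conn.User())
            }
            // Record which key was offered; read it back later from
            // ServerConn.Permissions once the handshake succeeds.
            return &ssh.Permissions{
                Extensions: map[string]string{
                    "pubkey-fp": ssh.FingerprintSHA256(key),
                },
            }, nil
        },
    }
}

func main() { _ = newServerConfig() }
```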
@@ -72,6 +95,10 @@ type ServerConfig struct {
    // Note that RFC 4253 section 4.2 requires that this string start with
    // "SSH-2.0-".
    ServerVersion string

    // BannerCallback, if present, is called and the return string is sent to
    // the client after key exchange completed but before authentication.
    BannerCallback func(conn ConnMetadata) string
}

// AddHostKey adds a private key as a host key. If an existing host
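BannerCallback, as added here, returns a plain string and runs after key exchange but before the first authentication attempt (the server-side hookup appears further down in this diff). A sketch:

```go
package main

import (
    "fmt"

    "golang.org/x/crypto/ssh"
)

func main() {
    config := &ssh.ServerConfig{
        // An empty return value sends no banner.
        BannerCallback: func(conn ssh.ConnMetadata) string {
            return fmt.Sprintf("Welcome %s, this service is monitored.\n", conn.User())
        },
    }
    _ = config
}
```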
@@ -142,6 +169,10 @@ type ServerConn struct {
func NewServerConn(c net.Conn, config *ServerConfig) (*ServerConn, <-chan NewChannel, <-chan *Request, error) {
    fullConf := *config
    fullConf.SetDefaults()
    if fullConf.MaxAuthTries == 0 {
        fullConf.MaxAuthTries = 6
    }

    s := &connection{
        sshConn: sshConn{conn: c},
    }

@@ -188,7 +219,7 @@ func (s *connection) serverHandshake(config *ServerConfig) (*Permissions, error)
    tr := newTransport(s.sshConn.conn, config.Rand, false /* not client */)
    s.transport = newServerTransport(tr, s.clientVersion, s.serverVersion, config)

    if err := s.transport.requestInitialKeyChange(); err != nil {
    if err := s.transport.waitSession(); err != nil {
        return nil, err
    }

@@ -231,7 +262,7 @@ func isAcceptableAlgo(algo string) bool {
    return false
}

func checkSourceAddress(addr net.Addr, sourceAddr string) error {
func checkSourceAddress(addr net.Addr, sourceAddrs string) error {
    if addr == nil {
        return errors.New("ssh: no address known for client, but source-address match required")
    }

@@ -241,33 +272,71 @@ func checkSourceAddress(addr net.Addr, sourceAddr string) error {
        return fmt.Errorf("ssh: remote address %v is not an TCP address when checking source-address match", addr)
    }

    if allowedIP := net.ParseIP(sourceAddr); allowedIP != nil {
        if bytes.Equal(allowedIP, tcpAddr.IP) {
            return nil
        }
    } else {
        _, ipNet, err := net.ParseCIDR(sourceAddr)
        if err != nil {
            return fmt.Errorf("ssh: error parsing source-address restriction %q: %v", sourceAddr, err)
        }
    for _, sourceAddr := range strings.Split(sourceAddrs, ",") {
        if allowedIP := net.ParseIP(sourceAddr); allowedIP != nil {
            if allowedIP.Equal(tcpAddr.IP) {
                return nil
            }
        } else {
            _, ipNet, err := net.ParseCIDR(sourceAddr)
            if err != nil {
                return fmt.Errorf("ssh: error parsing source-address restriction %q: %v", sourceAddr, err)
            }

            if ipNet.Contains(tcpAddr.IP) {
                return nil
            if ipNet.Contains(tcpAddr.IP) {
                return nil
            }
        }
    }

    return fmt.Errorf("ssh: remote address %v is not allowed because of source-address restriction", addr)
}
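checkSourceAddress now splits the restriction on commas, so a single source-address value can carry several IPs and CIDR ranges. Per the Permissions comment earlier in this diff, this is the one critical option the package enforces itself. A sketch of a Permissions value that would exercise the loop above; the addresses are examples only:

```go
package main

import "golang.org/x/crypto/ssh"

func main() {
    perms := &ssh.Permissions{
        CriticalOptions: map[string]string{
            // Plain IPs and CIDR blocks may be mixed, comma-separated.
            "source-address": "192.0.2.10,198.51.100.0/24",
        },
    }
    _ = perms
}
```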

// ServerAuthError implements the error interface. It appends any authentication
// errors that may occur, and is returned if all of the authentication methods
// provided by the user failed to authenticate.
type ServerAuthError struct {
    // Errors contains authentication errors returned by the authentication
    // callback methods.
    Errors []error
}

func (l ServerAuthError) Error() string {
    var errs []string
    for _, err := range l.Errors {
        errs = append(errs, err.Error())
    }
    return "[" + strings.Join(errs, ", ") + "]"
}

func (s *connection) serverAuthenticate(config *ServerConfig) (*Permissions, error) {
    var err error
    sessionID := s.transport.getSessionID()
    var cache pubKeyCache
    var perms *Permissions

    authFailures := 0
    var authErrs []error

userAuthLoop:
    for {
        if authFailures >= config.MaxAuthTries && config.MaxAuthTries > 0 {
            discMsg := &disconnectMsg{
                Reason:  2,
                Message: "too many authentication failures",
            }

            if err := s.transport.writePacket(Marshal(discMsg)); err != nil {
                return nil, err
            }

            return nil, discMsg
        }

        var userAuthReq userAuthRequestMsg
        if packet, err := s.transport.readPacket(); err != nil {
            if err == io.EOF {
                return nil, &ServerAuthError{Errors: authErrs}
            }
            return nil, err
        } else if err = Unmarshal(packet, &userAuthReq); err != nil {
            return nil, err
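When the client disconnects after exhausting its attempts, the accumulated per-attempt errors come back from NewServerConn wrapped in *ServerAuthError. A hedged sketch of inspecting them on the accepting side; listener and config setup are assumed to exist elsewhere:

```go
package main

import (
    "log"
    "net"

    "golang.org/x/crypto/ssh"
)

func handleConn(nConn net.Conn, config *ssh.ServerConfig) {
    _, chans, reqs, err := ssh.NewServerConn(nConn, config)
    if err != nil {
        if authErr, ok := err.(*ssh.ServerAuthError); ok {
            // One entry per failed attempt, in order.
            for _, e := range authErr.Errors {
                log.Printf("auth attempt failed: %v", e)
            }
        }
        return
    }
    go ssh.DiscardRequests(reqs)
    for newChan := range chans {
        newChan.Reject(ssh.UnknownChannelType, "not implemented in this sketch")
    }
}

func main() {}
```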
@@ -278,6 +347,19 @@ userAuthLoop:
        }

        s.user = userAuthReq.User

        if authFailures == 0 && config.BannerCallback != nil {
            msg := config.BannerCallback(s)
            if msg != "" {
                bannerMsg := &userAuthBannerMsg{
                    Message: msg,
                }
                if err := s.transport.writePacket(Marshal(bannerMsg)); err != nil {
                    return nil, err
                }
            }
        }

        perms = nil
        authErr := errors.New("no auth passed yet")

@@ -286,6 +368,11 @@ userAuthLoop:
            if config.NoClientAuth {
                authErr = nil
            }

            // allow initial attempt of 'none' without penalty
            if authFailures == 0 {
                authFailures--
            }
        case "password":
            if config.PasswordCallback == nil {
                authErr = errors.New("ssh: password auth not configured")

@@ -357,6 +444,7 @@ userAuthLoop:
            if isQuery {
                // The client can query if the given public key
                // would be okay.

                if len(payload) > 0 {
                    return nil, parseError(msgUserAuthRequest)
                }

@@ -385,7 +473,7 @@ userAuthLoop:
                if !isAcceptableAlgo(sig.Format) {
                    break
                }
                signedData := buildDataSignedForAuth(s.transport.getSessionID(), userAuthReq, algoBytes, pubKeyData)
                signedData := buildDataSignedForAuth(sessionID, userAuthReq, algoBytes, pubKeyData)

                if err := pubKey.Verify(signedData, sig); err != nil {
                    return nil, err

@@ -398,6 +486,8 @@ userAuthLoop:
            authErr = fmt.Errorf("ssh: unknown method %q", userAuthReq.Method)
        }

        authErrs = append(authErrs, authErr)

        if config.AuthLogCallback != nil {
            config.AuthLogCallback(s, userAuthReq.Method, authErr)
        }

@@ -406,6 +496,8 @@ userAuthLoop:
            break userAuthLoop
        }

        authFailures++

        var failureMsg userAuthFailureMsg
        if config.PasswordCallback != nil {
            failureMsg.Methods = append(failureMsg.Methods, "password")

@@ -421,12 +513,12 @@ userAuthLoop:
            return nil, errors.New("ssh: no authentication methods configured but NoClientAuth is also false")
        }

        if err = s.transport.writePacket(Marshal(&failureMsg)); err != nil {
        if err := s.transport.writePacket(Marshal(&failureMsg)); err != nil {
            return nil, err
        }
    }

    if err = s.transport.writePacket([]byte{msgUserAuthSuccess}); err != nil {
    if err := s.transport.writePacket([]byte{msgUserAuthSuccess}); err != nil {
        return nil, err
    }
    return perms, nil

@@ -231,6 +231,26 @@ func (s *Session) RequestSubsystem(subsystem string) error {
    return err
}

// RFC 4254 Section 6.7.
type ptyWindowChangeMsg struct {
    Columns uint32
    Rows    uint32
    Width   uint32
    Height  uint32
}

// WindowChange informs the remote host about a terminal window dimension change to h rows and w columns.
func (s *Session) WindowChange(h, w int) error {
    req := ptyWindowChangeMsg{
        Columns: uint32(w),
        Rows:    uint32(h),
        Width:   uint32(w * 8),
        Height:  uint32(h * 8),
    }
    _, err := s.ch.SendRequest("window-change", false, Marshal(&req))
    return err
}

// RFC 4254 Section 6.9.
type signalMsg struct {
    Signal string
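WindowChange pairs with RequestPty: request the PTY with an initial size, then send window-change whenever the local terminal is resized. A sketch; the session setup, terminal name and sizes are placeholders:

```go
package main

import "golang.org/x/crypto/ssh"

// startShell requests a PTY and an interactive shell on an existing session.
func startShell(sess *ssh.Session) error {
    modes := ssh.TerminalModes{ssh.ECHO: 1}
    if err := sess.RequestPty("xterm", 40, 120, modes); err != nil { // rows, cols
        return err
    }
    return sess.Shell()
}

// resize forwards a local terminal resize to the remote side.
func resize(sess *ssh.Session, rows, cols int) error {
    return sess.WindowChange(rows, cols)
}

func main() {}
```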
@@ -0,0 +1,115 @@
package ssh

import (
    "errors"
    "io"
    "net"
)

// streamLocalChannelOpenDirectMsg is a struct used for SSH_MSG_CHANNEL_OPEN message
// with "direct-streamlocal@openssh.com" string.
//
// See openssh-portable/PROTOCOL, section 2.4. connection: Unix domain socket forwarding
// https://github.com/openssh/openssh-portable/blob/master/PROTOCOL#L235
type streamLocalChannelOpenDirectMsg struct {
    socketPath string
    reserved0  string
    reserved1  uint32
}

// forwardedStreamLocalPayload is a struct used for SSH_MSG_CHANNEL_OPEN message
// with "forwarded-streamlocal@openssh.com" string.
type forwardedStreamLocalPayload struct {
    SocketPath string
    Reserved0  string
}

// streamLocalChannelForwardMsg is a struct used for SSH2_MSG_GLOBAL_REQUEST message
// with "streamlocal-forward@openssh.com"/"cancel-streamlocal-forward@openssh.com" string.
type streamLocalChannelForwardMsg struct {
    socketPath string
}

// ListenUnix is similar to ListenTCP but uses a Unix domain socket.
func (c *Client) ListenUnix(socketPath string) (net.Listener, error) {
    m := streamLocalChannelForwardMsg{
        socketPath,
    }
    // send message
    ok, _, err := c.SendRequest("streamlocal-forward@openssh.com", true, Marshal(&m))
    if err != nil {
        return nil, err
    }
    if !ok {
        return nil, errors.New("ssh: streamlocal-forward@openssh.com request denied by peer")
    }
    ch := c.forwards.add(&net.UnixAddr{Name: socketPath, Net: "unix"})

    return &unixListener{socketPath, c, ch}, nil
}

func (c *Client) dialStreamLocal(socketPath string) (Channel, error) {
    msg := streamLocalChannelOpenDirectMsg{
        socketPath: socketPath,
    }
    ch, in, err := c.OpenChannel("direct-streamlocal@openssh.com", Marshal(&msg))
    if err != nil {
        return nil, err
    }
    go DiscardRequests(in)
    return ch, err
}

type unixListener struct {
    socketPath string

    conn *Client
    in   <-chan forward
}

// Accept waits for and returns the next connection to the listener.
func (l *unixListener) Accept() (net.Conn, error) {
    s, ok := <-l.in
    if !ok {
        return nil, io.EOF
    }
    ch, incoming, err := s.newCh.Accept()
    if err != nil {
        return nil, err
    }
    go DiscardRequests(incoming)

    return &chanConn{
        Channel: ch,
        laddr: &net.UnixAddr{
            Name: l.socketPath,
            Net:  "unix",
        },
        raddr: &net.UnixAddr{
            Name: "@",
            Net:  "unix",
        },
    }, nil
}

// Close closes the listener.
func (l *unixListener) Close() error {
    // this also closes the listener.
    l.conn.forwards.remove(&net.UnixAddr{Name: l.socketPath, Net: "unix"})
    m := streamLocalChannelForwardMsg{
        l.socketPath,
    }
    ok, _, err := l.conn.SendRequest("cancel-streamlocal-forward@openssh.com", true, Marshal(&m))
    if err == nil && !ok {
        err = errors.New("ssh: cancel-streamlocal-forward@openssh.com failed")
    }
    return err
}

// Addr returns the listener's network address.
func (l *unixListener) Addr() net.Addr {
    return &net.UnixAddr{
        Name: l.socketPath,
        Net:  "unix",
    }
}
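The new file gives Client a Unix-socket analogue of remote TCP forwarding: ListenUnix asks the remote sshd to listen on a socket path and hands incoming connections back over the SSH connection. A hedged end-to-end sketch; host, credentials and socket path are placeholders:

```go
package main

import (
    "io"
    "log"

    "golang.org/x/crypto/ssh"
)

func main() {
    client, err := ssh.Dial("tcp", "example.com:22", &ssh.ClientConfig{
        User:            "demo",
        Auth:            []ssh.AuthMethod{ssh.Password("secret")},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
    })
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Remote side listens on this socket; connections arrive via Accept.
    ln, err := client.ListenUnix("/tmp/remote-app.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer ln.Close()

    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        go func(c io.WriteCloser) {
            defer c.Close()
            io.WriteString(c, "hello from the ssh client\n")
        }(conn)
    }
}
```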
@@ -20,12 +20,20 @@ import (
// addr. Incoming connections will be available by calling Accept on
// the returned net.Listener. The listener must be serviced, or the
// SSH connection may hang.
// N must be "tcp", "tcp4", "tcp6", or "unix".
func (c *Client) Listen(n, addr string) (net.Listener, error) {
    laddr, err := net.ResolveTCPAddr(n, addr)
    if err != nil {
        return nil, err
    switch n {
    case "tcp", "tcp4", "tcp6":
        laddr, err := net.ResolveTCPAddr(n, addr)
        if err != nil {
            return nil, err
        }
        return c.ListenTCP(laddr)
    case "unix":
        return c.ListenUnix(addr)
    default:
        return nil, fmt.Errorf("ssh: unsupported protocol: %s", n)
    }
    return c.ListenTCP(laddr)
}

// Automatic port allocation is broken with OpenSSH before 6.0. See

@@ -116,7 +124,7 @@ func (c *Client) ListenTCP(laddr *net.TCPAddr) (net.Listener, error) {
    }

    // Register this forward, using the port number we obtained.
    ch := c.forwards.add(*laddr)
    ch := c.forwards.add(laddr)

    return &tcpListener{laddr, c, ch}, nil
}

@@ -131,7 +139,7 @@ type forwardList struct {
// forwardEntry represents an established mapping of a laddr on a
// remote ssh server to a channel connected to a tcpListener.
type forwardEntry struct {
    laddr net.TCPAddr
    laddr net.Addr
    c     chan forward
}

@@ -139,16 +147,16 @@ type forwardEntry struct {
// arguments to add/remove/lookup should be address as specified in
// the original forward-request.
type forward struct {
    newCh NewChannel   // the ssh client channel underlying this forward
    raddr *net.TCPAddr // the raddr of the incoming connection
    newCh NewChannel // the ssh client channel underlying this forward
    raddr net.Addr   // the raddr of the incoming connection
}

func (l *forwardList) add(addr net.TCPAddr) chan forward {
func (l *forwardList) add(addr net.Addr) chan forward {
    l.Lock()
    defer l.Unlock()
    f := forwardEntry{
        addr,
        make(chan forward, 1),
        laddr: addr,
        c:     make(chan forward, 1),
    }
    l.entries = append(l.entries, f)
    return f.c

@@ -176,44 +184,69 @@ func parseTCPAddr(addr string, port uint32) (*net.TCPAddr, error) {

func (l *forwardList) handleChannels(in <-chan NewChannel) {
    for ch := range in {
        var payload forwardedTCPPayload
        if err := Unmarshal(ch.ExtraData(), &payload); err != nil {
            ch.Reject(ConnectionFailed, "could not parse forwarded-tcpip payload: "+err.Error())
            continue
        }
        var (
            laddr net.Addr
            raddr net.Addr
            err   error
        )
        switch channelType := ch.ChannelType(); channelType {
        case "forwarded-tcpip":
            var payload forwardedTCPPayload
            if err = Unmarshal(ch.ExtraData(), &payload); err != nil {
                ch.Reject(ConnectionFailed, "could not parse forwarded-tcpip payload: "+err.Error())
                continue
            }

        // RFC 4254 section 7.2 specifies that incoming
        // addresses should list the address, in string
        // format. It is implied that this should be an IP
        // address, as it would be impossible to connect to it
        // otherwise.
        laddr, err := parseTCPAddr(payload.Addr, payload.Port)
        if err != nil {
            ch.Reject(ConnectionFailed, err.Error())
            continue
        }
        raddr, err := parseTCPAddr(payload.OriginAddr, payload.OriginPort)
        if err != nil {
            ch.Reject(ConnectionFailed, err.Error())
            continue
        }
            // RFC 4254 section 7.2 specifies that incoming
            // addresses should list the address, in string
            // format. It is implied that this should be an IP
            // address, as it would be impossible to connect to it
            // otherwise.
            laddr, err = parseTCPAddr(payload.Addr, payload.Port)
            if err != nil {
                ch.Reject(ConnectionFailed, err.Error())
                continue
            }
            raddr, err = parseTCPAddr(payload.OriginAddr, payload.OriginPort)
            if err != nil {
                ch.Reject(ConnectionFailed, err.Error())
                continue
            }

        if ok := l.forward(*laddr, *raddr, ch); !ok {
        case "forwarded-streamlocal@openssh.com":
            var payload forwardedStreamLocalPayload
            if err = Unmarshal(ch.ExtraData(), &payload); err != nil {
                ch.Reject(ConnectionFailed, "could not parse forwarded-streamlocal@openssh.com payload: "+err.Error())
                continue
            }
            laddr = &net.UnixAddr{
                Name: payload.SocketPath,
                Net:  "unix",
            }
            raddr = &net.UnixAddr{
                Name: "@",
                Net:  "unix",
            }
        default:
            panic(fmt.Errorf("ssh: unknown channel type %s", channelType))
        }
        if ok := l.forward(laddr, raddr, ch); !ok {
            // Section 7.2, implementations MUST reject spurious incoming
            // connections.
            ch.Reject(Prohibited, "no forward for address")
            continue
        }

    }
}

// remove removes the forward entry, and the channel feeding its
// listener.
func (l *forwardList) remove(addr net.TCPAddr) {
func (l *forwardList) remove(addr net.Addr) {
    l.Lock()
    defer l.Unlock()
    for i, f := range l.entries {
        if addr.IP.Equal(f.laddr.IP) && addr.Port == f.laddr.Port {
        if addr.Network() == f.laddr.Network() && addr.String() == f.laddr.String() {
            l.entries = append(l.entries[:i], l.entries[i+1:]...)
            close(f.c)
            return

@@ -231,12 +264,12 @@ func (l *forwardList) closeAll() {
    l.entries = nil
}

func (l *forwardList) forward(laddr, raddr net.TCPAddr, ch NewChannel) bool {
func (l *forwardList) forward(laddr, raddr net.Addr, ch NewChannel) bool {
    l.Lock()
    defer l.Unlock()
    for _, f := range l.entries {
        if laddr.IP.Equal(f.laddr.IP) && laddr.Port == f.laddr.Port {
            f.c <- forward{ch, &raddr}
        if laddr.Network() == f.laddr.Network() && laddr.String() == f.laddr.String() {
            f.c <- forward{newCh: ch, raddr: raddr}
            return true
        }
    }

@@ -262,7 +295,7 @@ func (l *tcpListener) Accept() (net.Conn, error) {
    }
    go DiscardRequests(incoming)

    return &tcpChanConn{
    return &chanConn{
        Channel: ch,
        laddr:   l.laddr,
        raddr:   s.raddr,

@@ -277,7 +310,7 @@ func (l *tcpListener) Close() error {
    }

    // this also closes the listener.
    l.conn.forwards.remove(*l.laddr)
    l.conn.forwards.remove(l.laddr)
    ok, _, err := l.conn.SendRequest("cancel-tcpip-forward", true, Marshal(&m))
    if err == nil && !ok {
        err = errors.New("ssh: cancel-tcpip-forward failed")

@@ -293,29 +326,52 @@ func (l *tcpListener) Addr() net.Addr {
// Dial initiates a connection to the addr from the remote host.
// The resulting connection has a zero LocalAddr() and RemoteAddr().
func (c *Client) Dial(n, addr string) (net.Conn, error) {
    // Parse the address into host and numeric port.
    host, portString, err := net.SplitHostPort(addr)
    if err != nil {
        return nil, err
    var ch Channel
    switch n {
    case "tcp", "tcp4", "tcp6":
        // Parse the address into host and numeric port.
        host, portString, err := net.SplitHostPort(addr)
        if err != nil {
            return nil, err
        }
        port, err := strconv.ParseUint(portString, 10, 16)
        if err != nil {
            return nil, err
        }
        ch, err = c.dial(net.IPv4zero.String(), 0, host, int(port))
        if err != nil {
            return nil, err
        }
        // Use a zero address for local and remote address.
        zeroAddr := &net.TCPAddr{
            IP:   net.IPv4zero,
            Port: 0,
        }
        return &chanConn{
            Channel: ch,
            laddr:   zeroAddr,
            raddr:   zeroAddr,
        }, nil
    case "unix":
        var err error
        ch, err = c.dialStreamLocal(addr)
        if err != nil {
            return nil, err
        }
        return &chanConn{
            Channel: ch,
            laddr: &net.UnixAddr{
                Name: "@",
                Net:  "unix",
            },
            raddr: &net.UnixAddr{
                Name: addr,
                Net:  "unix",
            },
        }, nil
    default:
        return nil, fmt.Errorf("ssh: unsupported protocol: %s", n)
    }
    port, err := strconv.ParseUint(portString, 10, 16)
    if err != nil {
        return nil, err
    }
    // Use a zero address for local and remote address.
    zeroAddr := &net.TCPAddr{
        IP:   net.IPv4zero,
        Port: 0,
    }
    ch, err := c.dial(net.IPv4zero.String(), 0, host, int(port))
    if err != nil {
        return nil, err
    }
    return &tcpChanConn{
        Channel: ch,
        laddr:   zeroAddr,
        raddr:   zeroAddr,
    }, nil
}
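Dial gains a "unix" case that opens a direct-streamlocal channel to a socket on the remote host. A sketch; the Docker socket path is only an illustration of a remote Unix socket, not something this change references:

```go
package main

import (
    "net"

    "golang.org/x/crypto/ssh"
)

// dialRemoteSocket connects, through an established SSH client, to a Unix
// socket that lives on the remote machine.
func dialRemoteSocket(client *ssh.Client) (net.Conn, error) {
    return client.Dial("unix", "/var/run/docker.sock")
}

func main() {}
```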
// DialTCP connects to the remote address raddr on the network net,

@@ -332,7 +388,7 @@ func (c *Client) DialTCP(n string, laddr, raddr *net.TCPAddr) (net.Conn, error)
    if err != nil {
        return nil, err
    }
    return &tcpChanConn{
    return &chanConn{
        Channel: ch,
        laddr:   laddr,
        raddr:   raddr,

@@ -366,26 +422,26 @@ type tcpChan struct {
    Channel // the backing channel
}

// tcpChanConn fulfills the net.Conn interface without
// chanConn fulfills the net.Conn interface without
// the tcpChan having to hold laddr or raddr directly.
type tcpChanConn struct {
type chanConn struct {
    Channel
    laddr, raddr net.Addr
}

// LocalAddr returns the local network address.
func (t *tcpChanConn) LocalAddr() net.Addr {
func (t *chanConn) LocalAddr() net.Addr {
    return t.laddr
}

// RemoteAddr returns the remote network address.
func (t *tcpChanConn) RemoteAddr() net.Addr {
func (t *chanConn) RemoteAddr() net.Addr {
    return t.raddr
}

// SetDeadline sets the read and write deadlines associated
// with the connection.
func (t *tcpChanConn) SetDeadline(deadline time.Time) error {
func (t *chanConn) SetDeadline(deadline time.Time) error {
    if err := t.SetReadDeadline(deadline); err != nil {
        return err
    }

@@ -396,12 +452,14 @@ func (t *tcpChanConn) SetDeadline(deadline time.Time) error {
// A zero value for t means Read will not time out.
// After the deadline, the error from Read will implement net.Error
// with Timeout() == true.
func (t *tcpChanConn) SetReadDeadline(deadline time.Time) error {
func (t *chanConn) SetReadDeadline(deadline time.Time) error {
    // for compatibility with previous version,
    // the error message contains "tcpChan"
    return errors.New("ssh: tcpChan: deadline not supported")
}

// SetWriteDeadline exists to satisfy the net.Conn interface
// but is not implemented by this type. It always returns an error.
func (t *tcpChanConn) SetWriteDeadline(deadline time.Time) error {
func (t *chanConn) SetWriteDeadline(deadline time.Time) error {
    return errors.New("ssh: tcpChan: deadline not supported")
}

@@ -8,8 +8,13 @@ import (
    "bufio"
    "errors"
    "io"
    "log"
)

// debugTransport if set, will print packet types as they go over the
// wire. No message decoding is done, to minimize the impact on timing.
const debugTransport = false

const (
    gcmCipherID = "aes128-gcm@openssh.com"
    aes128cbcID = "aes128-cbc"

@@ -22,7 +27,9 @@ type packetConn interface {
    // Encrypt and send a packet of data to the remote peer.
    writePacket(packet []byte) error

    // Read a packet from the connection
    // Read a packet from the connection. The read is blocking,
    // i.e. if error is nil, then the returned byte slice is
    // always non-empty.
    readPacket() ([]byte, error)

    // Close closes the write-side of the connection.

@@ -38,7 +45,7 @@ type transport struct {
    bufReader *bufio.Reader
    bufWriter *bufio.Writer
    rand      io.Reader

    isClient bool
    io.Closer
}

@@ -84,9 +91,38 @@ func (t *transport) prepareKeyChange(algs *algorithms, kexResult *kexResult) err
    return nil
}

func (t *transport) printPacket(p []byte, write bool) {
    if len(p) == 0 {
        return
    }
    who := "server"
    if t.isClient {
        who = "client"
    }
    what := "read"
    if write {
        what = "write"
    }

    log.Println(what, who, p[0])
}

// Read and decrypt next packet.
func (t *transport) readPacket() ([]byte, error) {
    return t.reader.readPacket(t.bufReader)
func (t *transport) readPacket() (p []byte, err error) {
    for {
        p, err = t.reader.readPacket(t.bufReader)
        if err != nil {
            break
        }
        if len(p) == 0 || (p[0] != msgIgnore && p[0] != msgDebug) {
            break
        }
    }
    if debugTransport {
        t.printPacket(p, false)
    }

    return p, err
}

func (s *connectionState) readPacket(r *bufio.Reader) ([]byte, error) {

@@ -129,6 +165,9 @@ func (s *connectionState) readPacket(r *bufio.Reader) ([]byte, error) {
}

func (t *transport) writePacket(packet []byte) error {
    if debugTransport {
        t.printPacket(packet, true)
    }
    return t.writer.writePacket(t.bufWriter, t.rand, packet)
}

@@ -169,6 +208,8 @@ func newTransport(rwc io.ReadWriteCloser, rand io.Reader, isClient bool) *transp
        },
        Closer: rwc,
    }
    t.isClient = isClient

    if isClient {
        t.reader.dir = serverKeys
        t.writer.dir = clientKeys

@@ -213,7 +254,7 @@ func newPacketCipher(d direction, algs directionAlgorithms, kex *kexResult) (pac
    iv, key, macKey := generateKeys(d, algs, kex)

    if algs.Cipher == gcmCipherID {
        return newGCMCipher(iv, key, macKey)
        return newGCMCipher(iv, key)
    }

    if algs.Cipher == aes128cbcID {

@@ -226,6 +267,7 @@ func newPacketCipher(d direction, algs directionAlgorithms, kex *kexResult) (pac

    c := &streamPacketCipher{
        mac: macModes[algs.MAC].new(macKey),
        etm: macModes[algs.MAC].etm,
    }
    c.macResult = make([]byte, c.mac.Size())

@@ -3,10 +3,10 @@
    "ignore": "test appengine",
    "package": [
        {
            "checksumSHA1": "JN/re4+x/hCzMLGHmieUcykVDAg=",
            "checksumSHA1": "vAVjAz7Wpjnu7GGba4JLIDTpQEw=",
            "path": "code.gitea.io/git",
            "revision": "d47b98c44c9a6472e44ab80efe65235e11c6da2a",
            "revisionTime": "2017-10-23T00:52:09Z"
            "revision": "f9dd6826bbb51c92c6964ce18176c304ea286e54",
            "revisionTime": "2017-11-28T15:25:05Z"
        },
        {
            "checksumSHA1": "OICEgmUefW4L4l/FK/NVFnl/aOM=",

@@ -1279,40 +1279,40 @@
            "revisionTime": "2016-09-14T08:04:27Z"
        },
        {
            "checksumSHA1": "dwOedwBJ1EIK9+S3t108Bx054Y8=",
            "checksumSHA1": "IQkUIOnvlf0tYloFx9mLaXSvXWQ=",
            "path": "golang.org/x/crypto/curve25519",
            "revision": "9477e0b78b9ac3d0b03822fd95422e2fe07627cd",
            "revisionTime": "2016-10-31T15:37:30Z"
            "revision": "9f005a07e0d31d45e6656d241bb5c0f2efd4bc94",
            "revisionTime": "2017-09-21T17:41:56Z"
        },
        {
            "checksumSHA1": "wGb//LjBPNxYHqk+dcLo7BjPXK8=",
            "checksumSHA1": "1hwn8cgg4EVXhCpJIqmMbzqnUo0=",
            "path": "golang.org/x/crypto/ed25519",
            "revision": "9477e0b78b9ac3d0b03822fd95422e2fe07627cd",
            "revisionTime": "2016-10-31T15:37:30Z"
            "revision": "9f005a07e0d31d45e6656d241bb5c0f2efd4bc94",
            "revisionTime": "2017-09-21T17:41:56Z"
        },
        {
            "checksumSHA1": "LXFcVx8I587SnWmKycSDEq9yvK8=",
            "path": "golang.org/x/crypto/ed25519/internal/edwards25519",
            "revision": "9477e0b78b9ac3d0b03822fd95422e2fe07627cd",
            "revisionTime": "2016-10-31T15:37:30Z"
            "revision": "9f005a07e0d31d45e6656d241bb5c0f2efd4bc94",
            "revisionTime": "2017-09-21T17:41:56Z"
        },
        {
            "checksumSHA1": "MCeXr2RNeiG1XG6V+er1OR0qyeo=",
            "path": "golang.org/x/crypto/md4",
            "revision": "ede567c8e044a5913dad1d1af3696d9da953104c",
            "revisionTime": "2016-11-04T19:41:44Z"
            "revision": "9f005a07e0d31d45e6656d241bb5c0f2efd4bc94",
            "revisionTime": "2017-09-21T17:41:56Z"
        },
        {
            "checksumSHA1": "1MGpGDQqnUoRpv7VEcQrXOBydXE=",
            "path": "golang.org/x/crypto/pbkdf2",
            "revision": "8e06e8ddd9629eb88639aba897641bff8031f1d3",
            "revisionTime": "2016-09-10T18:59:01Z"
            "revision": "9f005a07e0d31d45e6656d241bb5c0f2efd4bc94",
            "revisionTime": "2017-09-21T17:41:56Z"
        },
        {
            "checksumSHA1": "LlElMHeTC34ng8eHzjvtUhAgrr8=",
            "checksumSHA1": "YXeyyvak2xbvsqj5MBHMzyG+22M=",
            "path": "golang.org/x/crypto/ssh",
            "revision": "9477e0b78b9ac3d0b03822fd95422e2fe07627cd",
            "revisionTime": "2016-10-31T15:37:30Z"
            "revision": "9f005a07e0d31d45e6656d241bb5c0f2efd4bc94",
            "revisionTime": "2017-09-21T17:41:56Z"
        },
        {
            "checksumSHA1": "9jjO5GjLa0XF/nfWihF02RoH4qc=",