--- /dev/null
+Git v2.4.3 Release Notes
+========================
+
+Fixes since v2.4.2
+------------------
+
+ * Error messages from "git branch" referred to remote-tracking
+ branches as "remote branches".
+
+ * "git rerere forget" in a repository without rerere enabled gave a
+ cryptic error message; it should be a silent no-op instead.
+
+ * "git pull --log" and "git pull --no-log" worked as expected, but
+ "git pull --log=20" did not.
+
+ * The pull.ff configuration was supposed to override the merge.ff
+ configuration, but it didn't.
+
+ * The code to read the pack bitmap wanted to allocate a few hundred
+ pointers to a structure, but by mistake allocated (and leaked) enough
+ memory to hold that many actual structures. Correct the allocation
+ size and also keep the array on the stack, as it is small enough.
+
+ * Various documentation mark-up fixes to make the output more
+ consistent in general and also make AsciiDoctor (an alternative
+ formatter) happier.
+
+ * "git bundle verify" did not diagnose extra parameters on the
+ command line.
+
+ * The multi-ref transaction support we merged a few releases ago
+ unnecessarily kept many file descriptors open, risking failure due
+ to resource exhaustion.
+
+ * The ref API did not handle well the case where
+ 'refs/heads/xyzzy/frotz' is removed at the same time as
+ 'refs/heads/xyzzy' is added (or vice versa).
+
+ * The "log --decorate" enhancement in Git 2.4 that shows the commit
+ at the tip of the current branch, e.g. "HEAD -> master", did not
+ work with --decorate=full.
+
+ * There was a commented-out (instead of being marked to expect
+ failure) test that documented a breakage that was fixed since the
+ test was written; turn it into a proper test.
+
+ * core.excludesfile (defaulting to $XDG_CONFIG_HOME/git/ignore) is
+ supposed to be overridden by the repository-specific
+ .git/info/exclude file, but the order was swapped from the beginning.
+ This belatedly fixes it.
+
+ * The connection initiation code for "ssh" transport tried to absorb
+ differences between the stock "ssh" and PuTTY-supplied "plink" and
+ its derivatives, but the logic to tell that we are using a "plink"
+ variant was too loose and falsely triggered when "plink" appeared
+ anywhere in the path (e.g. "/home/me/bin/uplink/ssh").
+
+ * "git rebase -i" moved the "current" command from "todo" to "done" a
+ bit prematurely, losing a step when a "pick" did not even start.
+
+ * "git add -e" did not allow the user to abort the operation by
+ killing the editor.
+
+ * Git 2.4 broke setting verbosity and progress levels on "git clone"
+ with native transports.
+
+ * Some time ago, "git blame" (incorrectly) lost the convert_to_git()
+ call when synthesizing a fake "tip" commit that represents the
+ state in the working tree, which broke folks who record the history
+ with LF line endings to make their project portable across
+ platforms while terminating lines in their working tree files with
+ CRLF for their platform.
+
+ * Code clean-up for xdg configuration path support.
+
+Also contains typofixes, documentation updates and trivial code
+clean-ups.
a case (equivalent to giving the `--no-ff` option from the command
line). When set to `only`, only such fast-forward merges are
allowed (equivalent to giving the `--ff-only` option from the
- command line).
+ command line). This setting overrides `merge.ff` when pulling.
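
For illustration, a minimal sketch of how the two settings interact
(the values are only an example): `merge.ff` keeps the default
fast-forward behaviour for `git merge`, while `pull.ff` restricts
`git pull` to fast-forward updates only.

----
$ git config merge.ff true    # "git merge" fast-forwards when possible
$ git config pull.ff only     # "git pull" refuses non-fast-forward updates
----
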
pull.rebase::
When true, rebase branches on top of the fetched branch, instead
remote.<name>.receivepack::
The default program to execute on the remote side when pushing. See
- option \--receive-pack of linkgit:git-push[1].
+ option --receive-pack of linkgit:git-push[1].
remote.<name>.uploadpack::
The default program to execute on the remote side when fetching. See
- option \--upload-pack of linkgit:git-fetch-pack[1].
+ option --upload-pack of linkgit:git-fetch-pack[1].
remote.<name>.tagOpt::
- Setting this value to \--no-tags disables automatic tag following when
- fetching from remote <name>. Setting it to \--tags will fetch every
+ Setting this value to --no-tags disables automatic tag following when
+ fetching from remote <name>. Setting it to --tags will fetch every
tag from remote <name>, even if they are not reachable from remote
branch heads. Passing these flags directly to linkgit:git-fetch[1] can
- override this setting. See options \--tags and \--no-tags of
+ override this setting. See options --tags and --no-tags of
linkgit:git-fetch[1].
remote.<name>.vcs::
Any diff-generating command can take the `-c` or `--cc` option to
produce a 'combined diff' when showing a merge. This is the default
format when showing merges with linkgit:git-diff[1] or
-linkgit:git-show[1]. Note also that you can give the `-m' option to any
+linkgit:git-show[1]. Note also that you can give the `-m` option to any
of these commands to force generation of diffs with individual parents
of a merge.
-u::
--patch::
Generate patch (see section on generating patches).
- {git-diff? This is the default.}
+ifdef::git-diff[]
+ This is the default.
+endif::git-diff[]
endif::git-format-patch[]
-s::
ifndef::git-format-patch[]
--raw::
Generate the raw format.
- {git-diff-core? This is the default.}
+ifdef::git-diff-core[]
+ This is the default.
+endif::git-diff-core[]
endif::git-format-patch[]
ifndef::git-format-patch[]
initial command menu and directly jumps to the `patch` subcommand.
See ``Interactive mode'' for details.
--e, \--edit::
+-e::
+--edit::
Open the diff vs. the index in an editor and let the user
edit it. After the editor was closed, adjust the hunk headers
and apply the patch to the index.
+
--
strip::
- Strip leading and trailing empty lines, trailing whitespace, and
- #commentary and collapse consecutive empty lines.
+ Strip leading and trailing empty lines, trailing whitespace,
+ commentary and collapse consecutive empty lines.
whitespace::
Same as `strip` except #commentary is not removed.
verbatim::
--verbose::
Show unified diff between the HEAD commit and what
would be committed at the bottom of the commit message
- template. Note that this diff output doesn't have its
- lines prefixed with '#'.
+ template to help the user describe the commit by reminding
+ them what changes the commit contains.
+ Note that this diff output doesn't have its
+ lines prefixed with '#'. This diff will not be a part
+ of the commit message.
+
If specified twice, show in addition the unified diff between
what would be committed and the worktree files, i.e. the unstaged
--file=<path>::
- Use `<path>` to store credentials. The file will have its
+ Use `<path>` to look up and store credentials. The file will have its
filesystem permissions set to prevent other users on the system
from reading it, but will not be encrypted or otherwise
- protected. Defaults to `~/.git-credentials`.
+ protected. If not specified, credentials will be searched for from
+ `~/.git-credentials` and `$XDG_CONFIG_HOME/git/credentials`, and
+ credentials will be written to `~/.git-credentials` if it exists, or
+ `$XDG_CONFIG_HOME/git/credentials` if it exists and the former does
+ not. See also <<FILES>>.
+
+[[FILES]]
+FILES
+-----
+
+If not set explicitly with '--file', there are two files where
+git-credential-store will search for credentials in order of precedence:
+
+~/.git-credentials::
+ User-specific credentials file.
+
+$XDG_CONFIG_HOME/git/credentials::
+ Second user-specific credentials file. If '$XDG_CONFIG_HOME' is not set
+ or empty, `$HOME/.config/git/credentials` will be used. Any credentials
+ stored in this file will not be used if `~/.git-credentials` has a
+ matching credential as well. It is a good idea not to create this file
+ if you sometimes use older versions of Git that do not support it.
+
+For credential lookups, the files are read in the order given above, with the
+first matching credential found taking precedence over credentials found in
+files further down the list.
+
+Credential storage will by default write to the first existing file in the
+list. If none of these files exist, `~/.git-credentials` will be created and
+written to.
+
+When erasing credentials, matching credentials will be erased from all files.
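
To make the precedence concrete, here is a small sketch with hypothetical
hosts and passwords, assuming '$XDG_CONFIG_HOME' is unset so the second
path falls back to `~/.config/git/credentials`:

----
$ cat ~/.git-credentials
https://alice:oldsecret@example.com
$ cat ~/.config/git/credentials
https://alice:newsecret@example.com
$ printf 'protocol=https\nhost=example.com\n\n' | git credential-store get
username=alice
password=oldsecret
----

The matching entry in `~/.git-credentials` shadows the one in the XDG
file, as described above.
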
EXAMPLES
--------
have been completed, or to save the marks table across
incremental runs. As <file> is only opened and truncated
at completion, the same path can also be safely given to
- \--import-marks.
+ --import-marks.
The file will not be written if no new object has been
marked/exported.
--import-marks=<file>::
Before processing any input, load the marks specified in
<file>. The input file must exist, must be readable, and
- must use the same format as produced by \--export-marks.
+ must use the same format as produced by --export-marks.
+
Any commits that have already been marked will not be exported again.
-If the backend uses a similar \--import-marks file, this allows for
+If the backend uses a similar --import-marks file, this allows for
incremental bidirectional exporting of the repository by keeping the
marks the same across runs.
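
As a sketch of such an incremental run (the marks file path is
hypothetical), reusing one marks file for both --import-marks and
--export-marks makes each later export emit only commits that are new
since the previous run:

----
# first run: export everything, remember the marks
$ git fast-export --export-marks=/tmp/project.marks master >batch-1.fi

# later run: previously exported commits are skipped
$ git fast-export --import-marks=/tmp/project.marks \
	--export-marks=/tmp/project.marks master >batch-2.fi
----
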
--quiet::
Disable all non-fatal output, making fast-import silent when it
is successful. This option disables the output shown by
- \--stats.
+ --stats.
--stats::
Display some basic statistics about the objects fast-import has
created, the packfiles they were stored into, and the
memory used by fast-import during this run. Showing this output
- is currently the default, but can be disabled with \--quiet.
+ is currently the default, but can be disabled with --quiet.
Options for Frontends
~~~~~~~~~~~~~~~~~~~~~
have been completed, or to save the marks table across
incremental runs. As <file> is only opened and truncated
at checkpoint (or completion) the same path can also be
- safely given to \--import-marks.
+ safely given to --import-marks.
--import-marks=<file>::
Before processing any input, load the marks specified in
<file>. The input file must exist, must be readable, and
- must use the same format as produced by \--export-marks.
+ must use the same format as produced by --export-marks.
Multiple options may be supplied to import more than one
set of marks. If a mark is defined to different values,
the last file wins.
prints a warning message. fast-import will always attempt to update all
branch refs, and does not stop on the first failure.
-Branch updates can be forced with \--force, but it's recommended that
-this only be used on an otherwise quiet repository. Using \--force
+Branch updates can be forced with --force, but it's recommended that
+this only be used on an otherwise quiet repository. Using --force
is not necessary for an initial import into an empty repository.
~~~~~~~~~~~~
The following date formats are supported. A frontend should select
the format it will use for this import by passing the format name
-in the \--date-format=<fmt> command-line option.
+in the --date-format=<fmt> command-line option.
`raw`::
This is the Git native format and is `<time> SP <offutc>`.
- It is also fast-import's default format, if \--date-format was
+ It is also fast-import's default format, if --date-format was
not specified.
+
The time of the event is specified by `<time>` as the number of
of bytes, except `LT`, `GT` and `LF`. `<name>` is typically UTF-8 encoded.
The time of the change is specified by `<when>` using the date format
-that was selected by the \--date-format=<fmt> command-line option.
+that was selected by the --date-format=<fmt> command-line option.
See ``Date Formats'' above for the set of supported formats, and
their syntax.
See `filemodify` above for a detailed description of `<path>`.
`filecopy`
-^^^^^^^^^^^^
+^^^^^^^^^^
Recursively copies an existing file or subdirectory to a different
location within the branch. The existing file or directory must
exist. If the destination exists it will be completely replaced
....
Note that fast-import automatically switches packfiles when the current
-packfile reaches \--max-pack-size, or 4 GiB, whichever limit is
+packfile reaches --max-pack-size, or 4 GiB, whichever limit is
smaller. During an automatic packfile switch fast-import does not update
the branch refs, tags or marks.
Use One Mark Per Commit
~~~~~~~~~~~~~~~~~~~~~~~
When doing a repository conversion, use a unique mark per commit
-(`mark :<n>`) and supply the \--export-marks option on the command
+(`mark :<n>`) and supply the --export-marks option on the command
line. fast-import will dump a file which lists every mark and the Git
object SHA-1 that corresponds to it. If the frontend can tie
the marks back to the source repository, it is easy to verify the
However repacking the repository is necessary to improve data
locality and access performance. It can also take hours on extremely
-large projects (especially if -f and a large \--window parameter is
+large projects (especially if -f and a large --window parameter is
used). Since repacking is safe to run alongside readers and writers,
run the repack in the background and let it complete on its own.
There is no reason to wait to explore your new Git project!
~~~~~~~~~~~~~~~~~~~~~~~~~
If you are repacking very old imported data (e.g. older than the
last year), consider expending some extra CPU time and supplying
-\--window=50 (or higher) when you run 'git repack'.
+--window=50 (or higher) when you run 'git repack'.
This will take longer, but will also produce a smaller packfile.
You only need to expend the effort once, and everyone using your
project will benefit from the smaller repository.
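
A one-time repack along those lines might look like this (the window
value is only an example):

----
$ git repack -a -d -f --window=50
----
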
fast-import automatically moves active branches to inactive status based on
a simple least-recently-used algorithm. The LRU chain is updated on
each `commit` command. The maximum number of active branches can be
-increased or decreased on the command line with \--active-branches=.
+increased or decreased on the command line with --active-branches=.
per active tree
~~~~~~~~~~~~~~~
the things up in .bash_profile).
--exec=<git-upload-pack>::
- Same as \--upload-pack=<git-upload-pack>.
+ Same as --upload-pack=<git-upload-pack>.
--depth=<n>::
Limit fetching to ancestor-chains not longer than n.
EXAMPLES
--------
-All of the following examples map 'http://$hostname/git/foo/bar.git'
-to '/var/www/git/foo/bar.git'.
+All of the following examples map `http://$hostname/git/foo/bar.git`
+to `/var/www/git/foo/bar.git`.
Apache 2.x::
Ensure mod_cgi, mod_alias, and mod_env are enabled, set
--shallow::
Optimize a pack that will be provided to a client with a shallow
- repository. This option, combined with \--thin, can result in a
+ repository. This option, combined with --thin, can result in a
smaller pack at the cost of speed.
--delta-base-offset::
--[no-]verify::
Toggle the pre-push hook (see linkgit:githooks[5]). The
- default is \--verify, giving the hook a chance to prevent the
- push. With \--no-verify, the hook is bypassed completely.
+ default is --verify, giving the hook a chance to prevent the
+ push. With --no-verify, the hook is bypassed completely.
include::urls-remotes.txt[]
If the upstream branch already contains a change you have made (e.g.,
because you mailed a patch which was applied upstream), then that commit
will be skipped. For example, running `git rebase master` on the
-following history (in which A' and A introduce the same set of changes,
+following history (in which `A'` and `A` introduce the same set of changes,
but have different committer information):
------------
SYNOPSIS
--------
[verse]
-'git rev-list' [ \--max-count=<number> ]
- [ \--skip=<number> ]
- [ \--max-age=<timestamp> ]
- [ \--min-age=<timestamp> ]
- [ \--sparse ]
- [ \--merges ]
- [ \--no-merges ]
- [ \--min-parents=<number> ]
- [ \--no-min-parents ]
- [ \--max-parents=<number> ]
- [ \--no-max-parents ]
- [ \--first-parent ]
- [ \--remove-empty ]
- [ \--full-history ]
- [ \--not ]
- [ \--all ]
- [ \--branches[=<pattern>] ]
- [ \--tags[=<pattern>] ]
- [ \--remotes[=<pattern>] ]
- [ \--glob=<glob-pattern> ]
- [ \--ignore-missing ]
- [ \--stdin ]
- [ \--quiet ]
- [ \--topo-order ]
- [ \--parents ]
- [ \--timestamp ]
- [ \--left-right ]
- [ \--left-only ]
- [ \--right-only ]
- [ \--cherry-mark ]
- [ \--cherry-pick ]
- [ \--encoding=<encoding> ]
- [ \--(author|committer|grep)=<pattern> ]
- [ \--regexp-ignore-case | -i ]
- [ \--extended-regexp | -E ]
- [ \--fixed-strings | -F ]
- [ \--date=(local|relative|default|iso|iso-strict|rfc|short) ]
- [ [ \--objects | \--objects-edge | \--objects-edge-aggressive ]
- [ \--unpacked ] ]
- [ \--pretty | \--header ]
- [ \--bisect ]
- [ \--bisect-vars ]
- [ \--bisect-all ]
- [ \--merge ]
- [ \--reverse ]
- [ \--walk-reflogs ]
- [ \--no-walk ] [ \--do-walk ]
- [ \--use-bitmap-index ]
+'git rev-list' [ --max-count=<number> ]
+ [ --skip=<number> ]
+ [ --max-age=<timestamp> ]
+ [ --min-age=<timestamp> ]
+ [ --sparse ]
+ [ --merges ]
+ [ --no-merges ]
+ [ --min-parents=<number> ]
+ [ --no-min-parents ]
+ [ --max-parents=<number> ]
+ [ --no-max-parents ]
+ [ --first-parent ]
+ [ --remove-empty ]
+ [ --full-history ]
+ [ --not ]
+ [ --all ]
+ [ --branches[=<pattern>] ]
+ [ --tags[=<pattern>] ]
+ [ --remotes[=<pattern>] ]
+ [ --glob=<glob-pattern> ]
+ [ --ignore-missing ]
+ [ --stdin ]
+ [ --quiet ]
+ [ --topo-order ]
+ [ --parents ]
+ [ --timestamp ]
+ [ --left-right ]
+ [ --left-only ]
+ [ --right-only ]
+ [ --cherry-mark ]
+ [ --cherry-pick ]
+ [ --encoding=<encoding> ]
+ [ --(author|committer|grep)=<pattern> ]
+ [ --regexp-ignore-case | -i ]
+ [ --extended-regexp | -E ]
+ [ --fixed-strings | -F ]
+ [ --date=(local|relative|default|iso|iso-strict|rfc|short) ]
+ [ [ --objects | --objects-edge | --objects-edge-aggressive ]
+ [ --unpacked ] ]
+ [ --pretty | --header ]
+ [ --bisect ]
+ [ --bisect-vars ]
+ [ --bisect-all ]
+ [ --merge ]
+ [ --reverse ]
+ [ --walk-reflogs ]
+ [ --no-walk ] [ --do-walk ]
+ [ --use-bitmap-index ]
<commit>... [ \-- <paths>... ]
DESCRIPTION
+
If you want to make sure that the output actually names an object in
your object database and/or can be used as a specific type of object
-you require, you can add "\^{type}" peeling operator to the parameter.
+you require, you can add the `^{type}` peeling operator to the parameter.
For example, `git rev-parse "$VAR^{commit}"` will make sure `$VAR`
names an existing object that is a commit-ish (i.e. a commit, or an
annotated tag that points at a commit). To make sure that `$VAR`
form as close to the original input as possible.
--symbolic-full-name::
- This is similar to \--symbolic, but it omits input that
+ This is similar to --symbolic, but it omits inputs that
are not refs (i.e. branch or tag names; or more
explicitly disambiguating "heads/master" form, when you
want to name the "master" branch when there is an
a directory on the default $PATH.
--exec=<git-receive-pack>::
- Same as \--receive-pack=<git-receive-pack>.
+ Same as --receive-pack=<git-receive-pack>.
--all::
Instead of explicitly specifying which refs to update,
For tags, it shows the tag message and the referenced objects.
For trees, it shows the names (equivalent to 'git ls-tree'
-with \--name-only).
+with --name-only).
For plain blobs, it shows the plain contents.
Given the following noisy input with '$' indicating the end of a line:
---------
+---------
|A brief introduction $
| $
|$
Use 'git stripspace' with no arguments to obtain:
---------
+---------
|A brief introduction$
|$
|A new paragraph$
Use 'git stripspace --strip-comments' to obtain:
---------
+---------
|A brief introduction$
|$
|A new paragraph$
--username=<user>;;
For transports that SVN handles authentication for (http,
https, and plain svn), specify the username. For other
- transports (e.g. svn+ssh://), you must include the username in
- the URL, e.g. svn+ssh://foo@svn.bar.com/project
+ transports (e.g. `svn+ssh://`), you must include the username in
+ the URL, e.g. `svn+ssh://foo@svn.bar.com/project`
--prefix=<prefix>;;
This allows one to specify a prefix which is prepended
to the names of remotes if trunk/branches/tags are
Ask the user to confirm that a patch set should actually be sent to SVN.
For each patch, one may answer "yes" (accept this patch), "no" (discard this
patch), "all" (accept all patches), or "quit".
- +
- 'git svn dcommit' returns immediately if answer is "no" or "quit", without
- committing anything to SVN.
++
+'git svn dcommit' returns immediately if answer is "no" or "quit", without
+committing anything to SVN.
'branch'::
Create a branch in the SVN repository.
CONFIGURATION
-------------
By default, 'git tag' in sign-with-default mode (-s) will use your
-committer identity (of the form "Your Name <\your@email.address>") to
+committer identity (of the form `Your Name <your@email.address>`) to
find a key. If you want to use a different default key, you can specify
it in the repository configuration as follows:
SYNOPSIS
--------
[verse]
-'git unpack-objects' [-n] [-q] [-r] [--strict] < <pack-file>
+'git unpack-objects' [-n] [-q] [-r] [--strict] < <packfile>
DESCRIPTION
"loose" (one object per file) format.
Objects that already exist in the repository will *not* be unpacked
-from the pack-file. Therefore, nothing will be unpacked if you use
-this command on a pack-file that exists within the target repository.
+from the packfile. Therefore, nothing will be unpacked if you use
+this command on a packfile that exists within the target repository.
See linkgit:git-repack[1] for options to generate
new packs and replace existing ones.
-------------
When specifying the -v option the format used is:
- SHA-1 type size size-in-pack-file offset-in-packfile
+ SHA-1 type size size-in-packfile offset-in-packfile
for objects that are not deltified in the pack, and
branch of the `git.git` repository.
Documentation for older releases are available here:
-* link:v2.4.2/git.html[documentation for release 2.4.2]
+* link:v2.4.3/git.html[documentation for release 2.4.3]
* release notes for
+ link:RelNotes/2.4.3.txt[2.4.3],
link:RelNotes/2.4.2.txt[2.4.2],
link:RelNotes/2.4.1.txt[2.4.1],
link:RelNotes/2.4.0.txt[2.4].
@@ -1 +1,2 @@
Hello World
+It's a new day for git
-----
+------------
i.e. the diff of the change we caused by adding another line to `hello`.
files:
- 'git diff-index' compares contents of a "tree" object and the
- working directory (when '\--cached' flag is not used) or a
- "tree" object and the index file (when '\--cached' flag is
+ working directory (when '--cached' flag is not used) or a
+ "tree" object and the index file (when '--cached' flag is
used);
- 'git diff-files' compares contents of the index file and the
When the "-C" option is used, the original contents of modified files,
and deleted files (and also unmodified files, if the
-"\--find-copies-harder" option is used) are considered as candidates
+"--find-copies-harder" option is used) are considered as candidates
of the source files in rename/copy operation. If the input were like
these filepairs, that talk about a modified file fileY and a newly
created file file0:
of <n> correspond to the number of -v flags passed on the
command line.
-'option progress' \{'true'|'false'\}::
+'option progress' {'true'|'false'}::
Enables (or disables) progress messages displayed by the
transport helper during a command.
'option depth' <depth>::
Deepens the history of a shallow repository.
-'option followtags' \{'true'|'false'\}::
+'option followtags' {'true'|'false'}::
If enabled the helper should automatically fetch annotated
tag objects if the object the tag points at was transferred
during the fetch command. If the tag is not fetched by
ask for the tag specifically. Some helpers may be able to
use this option to avoid a second network connection.
-'option dry-run' \{'true'|'false'\}:
+'option dry-run' {'true'|'false'}::
If true, pretend the operation completed successfully,
but don't actually change any repository data. For most
helpers this only applies to the 'push', if supported.
must not rely on this option being set before
connect request occurs.
-'option check-connectivity' \{'true'|'false'\}::
+'option check-connectivity' {'true'|'false'}::
Request the helper to check connectivity of a clone.
-'option force' \{'true'|'false'\}::
+'option force' {'true'|'false'}::
Request the helper to perform a force update. Defaults to
'false'.
-'option cloning \{'true'|'false'\}::
+'option cloning' {'true'|'false'}::
Notify the helper this is a clone request (i.e. the current
repository is guaranteed empty).
-'option update-shallow \{'true'|'false'\}::
+'option update-shallow' {'true'|'false'}::
Allow to extend .git/shallow if the new refs require it.
SEE ALSO
references.
----
- update-request = *shallow ( command-list | push-cert ) [pack-file]
+ update-request = *shallow ( command-list | push-cert ) [packfile]
shallow = PKT-LINE("shallow" SP obj-id LF)
*PKT-LINE(gpg-signature-lines LF)
PKT-LINE("push-cert-end" LF)
- pack-file = "PACK" 28*(OCTET)
+ packfile = "PACK" 28*(OCTET)
----
If the receiving end does not support delete-refs, the sending end MUST
sent, command-list MUST NOT be sent; the commands recorded in the
push certificate is used instead.
-The pack-file MUST NOT be sent if the only command used is 'delete'.
+The packfile MUST NOT be sent if the only command used is 'delete'.
-A pack-file MUST be sent if either create or update command is used,
+A packfile MUST be sent if either create or update command is used,
even if the server already has all the necessary objects. In this
-case the client MUST send an empty pack-file. The only time this
+case the client MUST send an empty packfile. The only time this
is likely to happen is if the client is creating
a new branch or a tag that points to an existing obj-id.
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v2.4.2
+DEF_VER=v2.4.3
LF='
'
@echo PYTHON_PATH=\''$(subst ','\'',$(PYTHON_PATH_SQ))'\' >>$@
@echo TAR=\''$(subst ','\'',$(subst ','\'',$(TAR)))'\' >>$@
@echo NO_CURL=\''$(subst ','\'',$(subst ','\'',$(NO_CURL)))'\' >>$@
+ @echo NO_EXPAT=\''$(subst ','\'',$(subst ','\'',$(NO_EXPAT)))'\' >>$@
@echo USE_LIBPCRE=\''$(subst ','\'',$(subst ','\'',$(USE_LIBPCRE)))'\' >>$@
@echo NO_PERL=\''$(subst ','\'',$(subst ','\'',$(NO_PERL)))'\' >>$@
@echo NO_PYTHON=\''$(subst ','\'',$(subst ','\'',$(NO_PYTHON)))'\' >>$@
-Documentation/RelNotes/2.4.2.txt
\ No newline at end of file
+Documentation/RelNotes/2.4.3.txt
\ No newline at end of file
static void bootstrap_attr_stack(void)
{
struct attr_stack *elem;
- char *xdg_attributes_file;
if (attr_stack)
return;
}
}
- if (!git_attributes_file) {
- home_config_paths(NULL, &xdg_attributes_file, "attributes");
- git_attributes_file = xdg_attributes_file;
- }
+ if (!git_attributes_file)
+ git_attributes_file = xdg_config_home("attributes");
if (git_attributes_file) {
elem = read_attr_from_file(git_attributes_file, 1);
if (elem) {
if (run_diff_files(&rev, 0))
die(_("Could not write patch"));
- launch_editor(file, NULL, NULL);
+ if (launch_editor(file, NULL, NULL))
+ die(_("editing patch failed"));
if (stat(file, &st))
die_errno(_("Could not stat '%s'"), file);
if (strbuf_read(&buf, 0, 0) < 0)
die_errno("failed to read from stdin");
}
+ convert_to_git(path, buf.buf, buf.len, &buf, 0);
origin->file.ptr = buf.buf;
origin->file.size = buf.len;
pretend_sha1_file(buf.buf, buf.len, OBJ_BLOB, origin->blob_sha1);
sha1, &flags);
if (!target) {
error(remote_branch
- ? _("remote branch '%s' not found.")
+ ? _("remote-tracking branch '%s' not found.")
: _("branch '%s' not found."), bname.buf);
ret = 1;
continue;
if (delete_ref(name, sha1, REF_NODEREF)) {
error(remote_branch
- ? _("Error deleting remote branch '%s'")
+ ? _("Error deleting remote-tracking branch '%s'")
: _("Error deleting branch '%s'"),
bname.buf);
ret = 1;
}
if (!quiet) {
printf(remote_branch
- ? _("Deleted remote branch %s (was %s).\n")
+ ? _("Deleted remote-tracking branch %s (was %s).\n")
: _("Deleted branch %s (was %s).\n"),
bname.buf,
(flags & REF_ISBROKEN) ? "broken"
if (!strcmp(cmd, "verify")) {
close(bundle_fd);
+ if (argc != 1) {
+ usage(builtin_bundle_usage);
+ return 1;
+ }
if (verify_bundle(&header, 1))
return 1;
fprintf(stderr, _("%s is okay\n"), bundle_file);
return !!list_bundle_refs(&header, argc, argv);
}
if (!strcmp(cmd, "create")) {
+ if (argc < 2) {
+ usage(builtin_bundle_usage);
+ return 1;
+ }
if (!startup_info->have_repository)
die(_("Need a repository to create a bundle."));
return !!create_bundle(&header, bundle_file, argc, argv);
remote = remote_get(option_origin);
transport = transport_get(remote, remote->url[0]);
+ transport_set_verbosity(transport, option_verbosity, option_progress);
+
path = get_repo_path(remote->url[0], &is_bundle);
is_local = option_local != 0 && path && !is_bundle;
if (is_local) {
if (option_single_branch)
transport_set_option(transport, TRANS_OPT_FOLLOWTAGS, "1");
- transport_set_verbosity(transport, option_verbosity, option_progress);
-
if (option_upload_pack)
transport_set_option(transport, TRANS_OPT_UPLOADPACK,
option_upload_pack);
static const char *implicit_ident_advice(void)
{
- char *user_config = NULL;
- char *xdg_config = NULL;
- int config_exists;
+ char *user_config = expand_user_path("~/.gitconfig");
+ char *xdg_config = xdg_config_home("config");
+ int config_exists = file_exists(user_config) || file_exists(xdg_config);
- home_config_paths(&user_config, &xdg_config, "config");
- config_exists = file_exists(user_config) || file_exists(xdg_config);
free(user_config);
free(xdg_config);
}
if (use_global_config) {
- char *user_config = NULL;
- char *xdg_config = NULL;
-
- home_config_paths(&user_config, &xdg_config, "config");
+ char *user_config = expand_user_path("~/.gitconfig");
+ char *xdg_config = xdg_config_home("config");
if (!user_config)
/*
#define util_as_integral(elem) ((intptr_t)((elem)->util))
-static void record_person(int which, struct string_list *people,
- struct commit *commit)
+static void record_person_from_buf(int which, struct string_list *people,
+ const char *buffer)
{
- const char *buffer;
char *name_buf, *name, *name_end;
struct string_list_item *elem;
const char *field;
field = (which == 'a') ? "\nauthor " : "\ncommitter ";
- buffer = get_commit_buffer(commit, NULL);
name = strstr(buffer, field);
if (!name)
return;
if (name_end < name)
return;
name_buf = xmemdupz(name, name_end - name + 1);
- unuse_commit_buffer(commit, buffer);
elem = string_list_lookup(people, name_buf);
if (!elem) {
free(name_buf);
}
+
+static void record_person(int which, struct string_list *people,
+ struct commit *commit)
+{
+ const char *buffer = get_commit_buffer(commit, NULL);
+ record_person_from_buf(which, people, buffer);
+ unuse_commit_buffer(commit, buffer);
+}
+
static int cmp_string_list_util_as_integral(const void *a_, const void *b_)
{
const struct string_list_item *a = a_, *b = b_;
off_t offset = find_pack_entry_one(sha1, p);
if (offset) {
if (!*found_pack) {
- if (!is_pack_valid(p)) {
- warning("packfile %s cannot be accessed", p->pack_name);
+ if (!is_pack_valid(p))
continue;
- }
*found_offset = offset;
*found_pack = p;
}
enum scld_error safe_create_leading_directories_const(const char *path);
int mkdir_in_gitdir(const char *path);
-extern void home_config_paths(char **global, char **xdg, char *file);
extern char *expand_user_path(const char *path);
const char *enter_repo(const char *path, int strict);
static inline int is_absolute_path(const char *path)
int daemon_avoid_alias(const char *path);
extern int is_ntfs_dotgit(const char *name);
+/**
+ * Return a newly allocated string with the evaluation of
+ * "$XDG_CONFIG_HOME/git/$filename" if $XDG_CONFIG_HOME is non-empty, otherwise
+ * "$HOME/.config/git/$filename". Return NULL upon error.
+ */
+extern char *xdg_config_home(const char *filename);
+
/* object replacement */
#define LOOKUP_REPLACE_OBJECT 1
extern void *read_sha1_file_extended(const unsigned char *sha1, enum object_type *type, unsigned long *size, unsigned flag);
int git_config_early(config_fn_t fn, void *data, const char *repo_config)
{
int ret = 0, found = 0;
- char *xdg_config = NULL;
- char *user_config = NULL;
-
- home_config_paths(&user_config, &xdg_config, "config");
+ char *xdg_config = xdg_config_home("config");
+ char *user_config = expand_user_path("~/.gitconfig");
if (git_config_system() && !access_or_die(git_etc_gitconfig(), R_OK, 0)) {
ret += git_config_from_file(fn, git_etc_gitconfig(),
conn->in = conn->out = -1;
if (protocol == PROTO_SSH) {
const char *ssh;
- int putty;
+ int putty, tortoiseplink = 0;
char *ssh_host = hostandport;
const char *port = NULL;
get_host_and_port(&ssh_host, &port);
free(path);
free(conn);
return NULL;
+ }
+
+ ssh = getenv("GIT_SSH_COMMAND");
+ if (ssh) {
+ conn->use_shell = 1;
+ putty = 0;
} else {
- ssh = getenv("GIT_SSH_COMMAND");
- if (ssh) {
- conn->use_shell = 1;
- putty = 0;
- } else {
- ssh = getenv("GIT_SSH");
- if (!ssh)
- ssh = "ssh";
- putty = !!strcasestr(ssh, "plink");
- }
-
- argv_array_push(&conn->args, ssh);
- if (putty && !strcasestr(ssh, "tortoiseplink"))
- argv_array_push(&conn->args, "-batch");
- if (port) {
- /* P is for PuTTY, p is for OpenSSH */
- argv_array_push(&conn->args, putty ? "-P" : "-p");
- argv_array_push(&conn->args, port);
- }
- argv_array_push(&conn->args, ssh_host);
+ const char *base;
+ char *ssh_dup;
+
+ ssh = getenv("GIT_SSH");
+ if (!ssh)
+ ssh = "ssh";
+
+ ssh_dup = xstrdup(ssh);
+ base = basename(ssh_dup);
+
+ tortoiseplink = !strcasecmp(base, "tortoiseplink") ||
+ !strcasecmp(base, "tortoiseplink.exe");
+ putty = !strcasecmp(base, "plink") ||
+ !strcasecmp(base, "plink.exe") || tortoiseplink;
+
+ free(ssh_dup);
+ }
+
+ argv_array_push(&conn->args, ssh);
+ if (tortoiseplink)
+ argv_array_push(&conn->args, "-batch");
+ if (port) {
+ /* P is for PuTTY, p is for OpenSSH */
+ argv_array_push(&conn->args, putty ? "-P" : "-p");
+ argv_array_push(&conn->args, port);
}
+ argv_array_push(&conn->args, ssh_host);
} else {
/* remove repo-local variables from the environment */
conn->env = local_repo_env;
static struct lock_file credential_lock;
-static void parse_credential_file(const char *fn,
+static int parse_credential_file(const char *fn,
struct credential *c,
void (*match_cb)(struct credential *),
void (*other_cb)(struct strbuf *))
FILE *fh;
struct strbuf line = STRBUF_INIT;
struct credential entry = CREDENTIAL_INIT;
+ int found_credential = 0;
fh = fopen(fn, "r");
if (!fh) {
- if (errno != ENOENT)
+ if (errno != ENOENT && errno != EACCES)
die_errno("unable to open %s", fn);
- return;
+ return found_credential;
}
while (strbuf_getline(&line, fh, '\n') != EOF) {
credential_from_url(&entry, line.buf);
if (entry.username && entry.password &&
credential_match(c, &entry)) {
+ found_credential = 1;
if (match_cb) {
match_cb(&entry);
break;
credential_clear(&entry);
strbuf_release(&line);
fclose(fh);
+ return found_credential;
}
static void print_entry(struct credential *c)
die_errno("unable to commit credential store");
}
-static void store_credential(const char *fn, struct credential *c)
+static void store_credential_file(const char *fn, struct credential *c)
{
struct strbuf buf = STRBUF_INIT;
- /*
- * Sanity check that what we are storing is actually sensible.
- * In particular, we can't make a URL without a protocol field.
- * Without either a host or pathname (depending on the scheme),
- * we have no primary key. And without a username and password,
- * we are not actually storing a credential.
- */
- if (!c->protocol || !(c->host || c->path) ||
- !c->username || !c->password)
- return;
-
strbuf_addf(&buf, "%s://", c->protocol);
strbuf_addstr_urlencode(&buf, c->username, 1);
strbuf_addch(&buf, ':');
strbuf_release(&buf);
}
-static void remove_credential(const char *fn, struct credential *c)
+static void store_credential(const struct string_list *fns, struct credential *c)
+{
+ struct string_list_item *fn;
+
+ /*
+ * Sanity check that what we are storing is actually sensible.
+ * In particular, we can't make a URL without a protocol field.
+ * Without either a host or pathname (depending on the scheme),
+ * we have no primary key. And without a username and password,
+ * we are not actually storing a credential.
+ */
+ if (!c->protocol || !(c->host || c->path) || !c->username || !c->password)
+ return;
+
+ for_each_string_list_item(fn, fns)
+ if (!access(fn->string, F_OK)) {
+ store_credential_file(fn->string, c);
+ return;
+ }
+ /*
+ * Write credential to the filename specified by fns->items[0], thus
+ * creating it
+ */
+ if (fns->nr)
+ store_credential_file(fns->items[0].string, c);
+}
+
+static void remove_credential(const struct string_list *fns, struct credential *c)
{
+ struct string_list_item *fn;
+
/*
* Sanity check that we actually have something to match
* against. The input we get is a restrictive pattern,
* to empty input. So explicitly disallow it, and require that the
* pattern have some actual content to match.
*/
- if (c->protocol || c->host || c->path || c->username)
- rewrite_credential_file(fn, c, NULL);
+ if (!c->protocol && !c->host && !c->path && !c->username)
+ return;
+ for_each_string_list_item(fn, fns)
+ if (!access(fn->string, F_OK))
+ rewrite_credential_file(fn->string, c, NULL);
}
-static int lookup_credential(const char *fn, struct credential *c)
+static void lookup_credential(const struct string_list *fns, struct credential *c)
{
- parse_credential_file(fn, c, print_entry, NULL);
- return c->username && c->password;
+ struct string_list_item *fn;
+
+ for_each_string_list_item(fn, fns)
+ if (parse_credential_file(fn->string, c, print_entry, NULL))
+ return; /* Found credential */
}
int main(int argc, char **argv)
};
const char *op;
struct credential c = CREDENTIAL_INIT;
+ struct string_list fns = STRING_LIST_INIT_DUP;
char *file = NULL;
struct option options[] = {
OPT_STRING(0, "file", &file, "path",
usage_with_options(usage, options);
op = argv[0];
- if (!file)
- file = expand_user_path("~/.git-credentials");
- if (!file)
+ if (file) {
+ string_list_append(&fns, file);
+ } else {
+ if ((file = expand_user_path("~/.git-credentials")))
+ string_list_append_nodup(&fns, file);
+ file = xdg_config_home("credentials");
+ if (file)
+ string_list_append_nodup(&fns, file);
+ }
+ if (!fns.nr)
die("unable to set up default path; use --file");
if (credential_read(&c, stdin) < 0)
die("unable to read credential");
if (!strcmp(op, "get"))
- lookup_credential(file, &c);
+ lookup_credential(&fns, &c);
else if (!strcmp(op, "erase"))
- remove_credential(file, &c);
+ remove_credential(&fns, &c);
else if (!strcmp(op, "store"))
- store_credential(file, &c);
+ store_credential(&fns, &c);
else
; /* Ignore unknown operation. */
+ string_list_clear(&fns, 0);
return 0;
}
void setup_standard_excludes(struct dir_struct *dir)
{
const char *path;
- char *xdg_path;
dir->exclude_per_dir = ".gitignore";
+
+ /* core.excludesfile defaulting to $XDG_CONFIG_HOME/git/ignore */
+ if (!excludes_file)
+ excludes_file = xdg_config_home("ignore");
+ if (excludes_file && !access_or_warn(excludes_file, R_OK, 0))
+ add_excludes_from_file(dir, excludes_file);
+
+ /* per repository user preference */
path = git_path("info/exclude");
- if (!excludes_file) {
- home_config_paths(NULL, &xdg_path, "ignore");
- excludes_file = xdg_path;
- }
if (!access_or_warn(path, R_OK, 0))
add_excludes_from_file(dir, path);
- if (excludes_file && !access_or_warn(excludes_file, R_OK, 0))
- add_excludes_from_file(dir, excludes_file);
}
int remove_path(const char *name)
fi
# Setup default fast-forward options via `pull.ff`
-pull_ff=$(git config pull.ff)
+pull_ff=$(bool_or_string_config pull.ff)
case "$pull_ff" in
+true)
+ no_ff=--ff
+ ;;
false)
no_ff=--no-ff
;;
diffstat=--no-stat ;;
--stat|--summary)
diffstat=--stat ;;
- --log|--no-log)
- log_arg=$1 ;;
+ --log|--log=*|--no-log)
+ log_arg="$1" ;;
--no-c|--no-co|--no-com|--no-comm|--no-commi|--no-commit)
no_commit=--no-commit ;;
--c|--co|--com|--comm|--commi|--commit)
fi
}
+# Put the last action marked done at the beginning of the todo list
+# again. If there has not been an action marked done yet, leave the list of
+# items on the todo list unchanged.
+reschedule_last_action () {
+ tail -n 1 "$done" | cat - "$todo" >"$todo".new
+ sed -e \$d <"$done" >"$done".new
+ mv -f "$todo".new "$todo"
+ mv -f "$done".new "$done"
+}
+
append_todo_help () {
git stripspace --comment-lines >>"$todo" <<\EOF
output eval git cherry-pick \
${gpg_sign_opt:+$(git rev-parse --sq-quote "$gpg_sign_opt")} \
"$strategy_args" $empty_args $ff "$@"
+
+ # If cherry-pick dies it leaves the to-be-picked commit unrecorded. Reschedule
+ # previous task so this commit is not lost.
+ ret=$?
+ case "$ret" in [01]) ;; *) reschedule_last_action ;; esac
+ return $ret
}
pick_one_preserving_merges () {
#include "line-log.h"
static struct decoration name_decoration = { "object names" };
+static int decoration_loaded;
+static int decoration_flags;
static char decoration_colors[][COLOR_MAXLEN] = {
GIT_COLOR_RESET,
struct object *obj;
enum decoration_type type = DECORATION_NONE;
+ assert(cb_data == NULL);
+
if (starts_with(refname, "refs/replace/")) {
unsigned char original_sha1[20];
if (!check_replace_refs)
else if (!strcmp(refname, "HEAD"))
type = DECORATION_REF_HEAD;
- if (!cb_data || *(int *)cb_data == DECORATE_SHORT_REFS)
- refname = prettify_refname(refname);
add_name_decoration(type, refname, obj);
while (obj->type == OBJ_TAG) {
obj = ((struct tag *)obj)->tagged;
void load_ref_decorations(int flags)
{
- static int loaded;
- if (!loaded) {
- loaded = 1;
- for_each_ref(add_ref_decoration, &flags);
- head_ref(add_ref_decoration, &flags);
+ if (!decoration_loaded) {
+ decoration_loaded = 1;
+ decoration_flags = flags;
+ for_each_ref(add_ref_decoration, NULL);
+ head_ref(add_ref_decoration, NULL);
for_each_commit_graft(add_graft_decoration, NULL);
}
}
branch_name = resolve_ref_unsafe("HEAD", 0, unused, &rru_flags);
if (!(rru_flags & REF_ISSYMREF))
return NULL;
- if (!skip_prefix(branch_name, "refs/heads/", &branch_name))
+
+ if (!starts_with(branch_name, "refs/"))
return NULL;
/* OK, do we have that ref in the list? */
return NULL;
}
+static void show_name(struct strbuf *sb, const struct name_decoration *decoration)
+{
+ if (decoration_flags == DECORATE_SHORT_REFS)
+ strbuf_addstr(sb, prettify_refname(decoration->name));
+ else
+ strbuf_addstr(sb, decoration->name);
+}
+
/*
* The caller makes sure there is no funny color before calling.
* format_decorations_extended makes sure the same after return.
if (decoration->type == DECORATION_REF_TAG)
strbuf_addstr(sb, "tag: ");
- strbuf_addstr(sb, decoration->name);
+ show_name(sb, decoration);
if (current_and_HEAD &&
decoration->type == DECORATION_REF_HEAD) {
strbuf_addstr(sb, " -> ");
strbuf_addstr(sb, color_reset);
strbuf_addstr(sb, decorate_get_color(use_color, current_and_HEAD->type));
- strbuf_addstr(sb, current_and_HEAD->name);
+ show_name(sb, current_and_HEAD);
}
strbuf_addstr(sb, color_reset);
return buffer[(*pos)++];
}
+#define MAX_XOR_OFFSET 160
+
static int load_bitmap_entries_v1(struct bitmap_index *index)
{
- static const size_t MAX_XOR_OFFSET = 160;
-
uint32_t i;
- struct stored_bitmap **recent_bitmaps;
-
- recent_bitmaps = xcalloc(MAX_XOR_OFFSET, sizeof(struct stored_bitmap));
+ struct stored_bitmap *recent_bitmaps[MAX_XOR_OFFSET] = { NULL };
for (i = 0; i < index->entry_count; ++i) {
int xor_offset, flags;
return ret;
}
-void home_config_paths(char **global, char **xdg, char *file)
-{
- char *xdg_home = getenv("XDG_CONFIG_HOME");
- char *home = getenv("HOME");
- char *to_free = NULL;
-
- if (!home) {
- if (global)
- *global = NULL;
- } else {
- if (!xdg_home) {
- to_free = mkpathdup("%s/.config", home);
- xdg_home = to_free;
- }
- if (global)
- *global = mkpathdup("%s/.gitconfig", home);
- }
-
- if (xdg) {
- if (!xdg_home)
- *xdg = NULL;
- else
- *xdg = mkpathdup("%s/git/%s", xdg_home, file);
- }
-
- free(to_free);
-}
-
char *git_path_submodule(const char *path, const char *fmt, ...)
{
char *pathname = get_pathname();
len = -1;
}
}
+
+char *xdg_config_home(const char *filename)
+{
+ const char *home, *config_home;
+
+ assert(filename);
+ config_home = getenv("XDG_CONFIG_HOME");
+ if (config_home && *config_home)
+ return mkpathdup("%s/git/%s", config_home, filename);
+
+ home = getenv("HOME");
+ if (home)
+ return mkpathdup("%s/.config/git/%s", home, filename);
+ return NULL;
+}
#define REF_HAVE_OLD 0x10
/*
+ * Used as a flag in ref_update::flags when the lockfile needs to be
+ * committed.
+ */
+#define REF_NEEDS_COMMIT 0x20
+
+/*
* Try to read one refname component from the front of refname.
* Return the length of the component found, or -1 if the component is
* not legal. It is legal if it is something reasonable to have under
* presence of an empty subdirectory does not block the creation of a
* similarly-named reference. (The fact that reference names with the
* same leading components can conflict *with each other* is a
- * separate issue that is regulated by is_refname_available().)
+ * separate issue that is regulated by verify_refname_available().)
*
* Please note that the name field contains the fully-qualified
* reference (or subdirectory) name. Space could be saved by only
}
}
-static int entry_matches(struct ref_entry *entry, const struct string_list *list)
-{
- return list && string_list_has_string(list, entry->name);
-}
-
struct nonmatching_ref_data {
const struct string_list *skip;
- struct ref_entry *found;
+ const char *conflicting_refname;
};
static int nonmatching_ref_fn(struct ref_entry *entry, void *vdata)
{
struct nonmatching_ref_data *data = vdata;
- if (entry_matches(entry, data->skip))
+ if (data->skip && string_list_has_string(data->skip, entry->name))
return 0;
- data->found = entry;
+ data->conflicting_refname = entry->name;
return 1;
}
-static void report_refname_conflict(struct ref_entry *entry,
- const char *refname)
-{
- error("'%s' exists; cannot create '%s'", entry->name, refname);
-}
-
/*
- * Return true iff a reference named refname could be created without
- * conflicting with the name of an existing reference in dir. If
- * skip is non-NULL, ignore potential conflicts with refs in skip
- * (e.g., because they are scheduled for deletion in the same
- * operation).
+ * Return 0 if a reference named refname could be created without
+ * conflicting with the name of an existing reference in dir.
+ * Otherwise, return a negative value and write an explanation to err.
+ * If extras is non-NULL, it is a list of additional refnames with
+ * which refname is not allowed to conflict. If skip is non-NULL,
+ * ignore potential conflicts with refs in skip (e.g., because they
+ * are scheduled for deletion in the same operation). Behavior is
+ * undefined if the same name is listed in both extras and skip.
*
* Two reference names conflict if one of them exactly matches the
- * leading components of the other; e.g., "foo/bar" conflicts with
- * both "foo" and with "foo/bar/baz" but not with "foo/bar" or
- * "foo/barbados".
+ * leading components of the other; e.g., "refs/foo/bar" conflicts
+ * with both "refs/foo" and with "refs/foo/bar/baz" but not with
+ * "refs/foo/bar" or "refs/foo/barbados".
*
- * skip must be sorted.
+ * extras and skip must be sorted.
*/
-static int is_refname_available(const char *refname,
- const struct string_list *skip,
- struct ref_dir *dir)
+static int verify_refname_available(const char *refname,
+ const struct string_list *extras,
+ const struct string_list *skip,
+ struct ref_dir *dir,
+ struct strbuf *err)
{
const char *slash;
- size_t len;
int pos;
- char *dirname;
+ struct strbuf dirname = STRBUF_INIT;
+ int ret = -1;
+
+ /*
+ * For the sake of comments in this function, suppose that
+ * refname is "refs/foo/bar".
+ */
+
+ assert(err);
+ strbuf_grow(&dirname, strlen(refname) + 1);
for (slash = strchr(refname, '/'); slash; slash = strchr(slash + 1, '/')) {
+ /* Expand dirname to the new prefix, not including the trailing slash: */
+ strbuf_add(&dirname, refname + dirname.len, slash - refname - dirname.len);
+
/*
- * We are still at a leading dir of the refname; we are
- * looking for a conflict with a leaf entry.
- *
- * If we find one, we still must make sure it is
- * not in "skip".
+ * We are still at a leading dir of the refname (e.g.,
+ * "refs/foo"); if there is a reference with that name,
+ * it is a conflict, *unless* it is in skip.
*/
- pos = search_ref_dir(dir, refname, slash - refname);
- if (pos >= 0) {
- struct ref_entry *entry = dir->entries[pos];
- if (entry_matches(entry, skip))
- return 1;
- report_refname_conflict(entry, refname);
- return 0;
+ if (dir) {
+ pos = search_ref_dir(dir, dirname.buf, dirname.len);
+ if (pos >= 0 &&
+ (!skip || !string_list_has_string(skip, dirname.buf))) {
+ /*
+ * We found a reference whose name is
+ * a proper prefix of refname; e.g.,
+ * "refs/foo", and is not in skip.
+ */
+ strbuf_addf(err, "'%s' exists; cannot create '%s'",
+ dirname.buf, refname);
+ goto cleanup;
+ }
}
+ if (extras && string_list_has_string(extras, dirname.buf) &&
+ (!skip || !string_list_has_string(skip, dirname.buf))) {
+ strbuf_addf(err, "cannot process '%s' and '%s' at the same time",
+ refname, dirname.buf);
+ goto cleanup;
+ }
/*
* Otherwise, we can try to continue our search with
- * the next component; if we come up empty, we know
- * there is nothing under this whole prefix.
+ * the next component. So try to look up the
+ * directory, e.g., "refs/foo/". If we come up empty,
+ * we know there is nothing under this whole prefix,
+ * but even in that case we still have to continue the
+ * search for conflicts with extras.
*/
- pos = search_ref_dir(dir, refname, slash + 1 - refname);
- if (pos < 0)
- return 1;
-
- dir = get_ref_dir(dir->entries[pos]);
+ strbuf_addch(&dirname, '/');
+ if (dir) {
+ pos = search_ref_dir(dir, dirname.buf, dirname.len);
+ if (pos < 0) {
+ /*
+ * There was no directory "refs/foo/",
+ * so there is nothing under this
+ * whole prefix. So there is no need
+ * to continue looking for conflicting
+ * references. But we need to continue
+ * looking for conflicting extras.
+ */
+ dir = NULL;
+ } else {
+ dir = get_ref_dir(dir->entries[pos]);
+ }
+ }
}
/*
- * We are at the leaf of our refname; we want to
- * make sure there are no directories which match it.
+ * We are at the leaf of our refname (e.g., "refs/foo/bar").
+ * There is no point in searching for a reference with that
+ * name, because a refname isn't considered to conflict with
+ * itself. But we still need to check for references whose
+ * names are in the "refs/foo/bar/" namespace, because they
+ * *do* conflict.
*/
- len = strlen(refname);
- dirname = xmallocz(len + 1);
- sprintf(dirname, "%s/", refname);
- pos = search_ref_dir(dir, dirname, len + 1);
- free(dirname);
+ strbuf_addstr(&dirname, refname + dirname.len);
+ strbuf_addch(&dirname, '/');
+
+ if (dir) {
+ pos = search_ref_dir(dir, dirname.buf, dirname.len);
- if (pos >= 0) {
+ if (pos >= 0) {
+ /*
+ * We found a directory named "$refname/"
+ * (e.g., "refs/foo/bar/"). It is a problem
+ * iff it contains any ref that is not in
+ * "skip".
+ */
+ struct nonmatching_ref_data data;
+
+ data.skip = skip;
+ data.conflicting_refname = NULL;
+ dir = get_ref_dir(dir->entries[pos]);
+ sort_ref_dir(dir);
+ if (do_for_each_entry_in_dir(dir, 0, nonmatching_ref_fn, &data)) {
+ strbuf_addf(err, "'%s' exists; cannot create '%s'",
+ data.conflicting_refname, refname);
+ goto cleanup;
+ }
+ }
+ }
+
+ if (extras) {
/*
- * We found a directory named "refname". It is a
- * problem iff it contains any ref that is not
- * in "skip".
+ * Check for entries in extras that start with
+ * "$refname/". We do that by looking for the place
+ * where "$refname/" would be inserted in extras. If
+ * there is an entry at that position that starts with
+ * "$refname/" and is not in skip, then we have a
+ * conflict.
*/
- struct ref_entry *entry = dir->entries[pos];
- struct ref_dir *dir = get_ref_dir(entry);
- struct nonmatching_ref_data data;
+ for (pos = string_list_find_insert_index(extras, dirname.buf, 0);
+ pos < extras->nr; pos++) {
+ const char *extra_refname = extras->items[pos].string;
- data.skip = skip;
- sort_ref_dir(dir);
- if (!do_for_each_entry_in_dir(dir, 0, nonmatching_ref_fn, &data))
- return 1;
+ if (!starts_with(extra_refname, dirname.buf))
+ break;
- report_refname_conflict(data.found, refname);
- return 0;
+ if (!skip || !string_list_has_string(skip, extra_refname)) {
+ strbuf_addf(err, "cannot process '%s' and '%s' at the same time",
+ refname, extra_refname);
+ goto cleanup;
+ }
+ }
}
- /*
- * There is no point in searching for another leaf
- * node which matches it; such an entry would be the
- * ref we are looking for, not a conflict.
- */
- return 1;
+ /* No conflicts were found */
+ ret = 0;
+
+cleanup:
+ strbuf_release(&dirname);
+ return ret;
}
struct packed_ref_cache {
*/
static struct ref_lock *lock_ref_sha1_basic(const char *refname,
const unsigned char *old_sha1,
+ const struct string_list *extras,
const struct string_list *skip,
- unsigned int flags, int *type_p)
+ unsigned int flags, int *type_p,
+ struct strbuf *err)
{
char *ref_file;
const char *orig_refname = refname;
int resolve_flags = 0;
int attempts_remaining = 3;
+ assert(err);
+
lock = xcalloc(1, sizeof(struct ref_lock));
lock->lock_fd = -1;
ref_file = git_path("%s", orig_refname);
if (remove_empty_directories(ref_file)) {
last_errno = errno;
- error("there are still refs under '%s'", orig_refname);
+
+ if (!verify_refname_available(orig_refname, extras, skip,
+ get_loose_refs(&ref_cache), err))
+ strbuf_addf(err, "there are still refs under '%s'",
+ orig_refname);
+
goto error_return;
}
refname = resolve_ref_unsafe(orig_refname, resolve_flags,
*type_p = type;
if (!refname) {
last_errno = errno;
- error("unable to resolve reference %s: %s",
- orig_refname, strerror(errno));
+ if (last_errno != ENOTDIR ||
+ !verify_refname_available(orig_refname, extras, skip,
+ get_loose_refs(&ref_cache), err))
+ strbuf_addf(err, "unable to resolve reference %s: %s",
+ orig_refname, strerror(last_errno));
+
goto error_return;
}
/*
* our refname.
*/
if (is_null_sha1(lock->old_sha1) &&
- !is_refname_available(refname, skip, get_packed_refs(&ref_cache))) {
+ verify_refname_available(refname, extras, skip,
+ get_packed_refs(&ref_cache), err)) {
last_errno = ENOTDIR;
goto error_return;
}
/* fall through */
default:
last_errno = errno;
- error("unable to create directory for %s", ref_file);
+ strbuf_addf(err, "unable to create directory for %s", ref_file);
goto error_return;
}
*/
goto retry;
else {
- struct strbuf err = STRBUF_INIT;
- unable_to_lock_message(ref_file, errno, &err);
- error("%s", err.buf);
- strbuf_release(&err);
+ unable_to_lock_message(ref_file, errno, err);
goto error_return;
}
}
static int rename_ref_available(const char *oldname, const char *newname)
{
struct string_list skip = STRING_LIST_INIT_NODUP;
+ struct strbuf err = STRBUF_INIT;
int ret;
string_list_insert(&skip, oldname);
- ret = is_refname_available(newname, &skip, get_packed_refs(&ref_cache))
- && is_refname_available(newname, &skip, get_loose_refs(&ref_cache));
+ ret = !verify_refname_available(newname, NULL, &skip,
+ get_packed_refs(&ref_cache), &err)
+ && !verify_refname_available(newname, NULL, &skip,
+ get_loose_refs(&ref_cache), &err);
+ if (!ret)
+ error("%s", err.buf);
+
string_list_clear(&skip, 0);
+ strbuf_release(&err);
return ret;
}
-static int write_ref_sha1(struct ref_lock *lock, const unsigned char *sha1,
- const char *logmsg);
+static int write_ref_to_lockfile(struct ref_lock *lock, const unsigned char *sha1);
+static int commit_ref_update(struct ref_lock *lock,
+ const unsigned char *sha1, const char *logmsg);
int rename_ref(const char *oldrefname, const char *newrefname, const char *logmsg)
{
struct stat loginfo;
int log = !lstat(git_path("logs/%s", oldrefname), &loginfo);
const char *symref = NULL;
+ struct strbuf err = STRBUF_INIT;
if (log && S_ISLNK(loginfo.st_mode))
return error("reflog for %s is a symlink", oldrefname);
logmoved = log;
- lock = lock_ref_sha1_basic(newrefname, NULL, NULL, 0, NULL);
+ lock = lock_ref_sha1_basic(newrefname, NULL, NULL, NULL, 0, NULL, &err);
if (!lock) {
- error("unable to lock %s for update", newrefname);
+ error("unable to rename '%s' to '%s': %s", oldrefname, newrefname, err.buf);
+ strbuf_release(&err);
goto rollback;
}
hashcpy(lock->old_sha1, orig_sha1);
- if (write_ref_sha1(lock, orig_sha1, logmsg)) {
+
+ if (write_ref_to_lockfile(lock, orig_sha1) ||
+ commit_ref_update(lock, orig_sha1, logmsg)) {
error("unable to write current sha1 into %s", newrefname);
goto rollback;
}
return 0;
rollback:
- lock = lock_ref_sha1_basic(oldrefname, NULL, NULL, 0, NULL);
+ lock = lock_ref_sha1_basic(oldrefname, NULL, NULL, NULL, 0, NULL, &err);
if (!lock) {
- error("unable to lock %s for rollback", oldrefname);
+ error("unable to lock %s for rollback: %s", oldrefname, err.buf);
+ strbuf_release(&err);
goto rollbacklog;
}
flag = log_all_ref_updates;
log_all_ref_updates = 0;
- if (write_ref_sha1(lock, orig_sha1, NULL))
+ if (write_ref_to_lockfile(lock, orig_sha1) ||
+ commit_ref_update(lock, orig_sha1, NULL))
error("unable to write current sha1 into %s", oldrefname);
log_all_ref_updates = flag;
}
/*
- * Write sha1 into the ref specified by the lock. Make sure that errno
- * is sane on error.
+ * Write sha1 into the open lockfile, then close the lockfile. On
+ * error, roll back the lockfile and set errno to reflect the problem.
*/
-static int write_ref_sha1(struct ref_lock *lock,
- const unsigned char *sha1, const char *logmsg)
+static int write_ref_to_lockfile(struct ref_lock *lock,
+ const unsigned char *sha1)
{
static char term = '\n';
struct object *o;
errno = save_errno;
return -1;
}
+ return 0;
+}
+
+/*
+ * Commit a change to a loose reference that has already been written
+ * to the loose reference lockfile. Also update the reflogs if
+ * necessary, using the specified logmsg (which can be NULL).
+ */
+static int commit_ref_update(struct ref_lock *lock,
+ const unsigned char *sha1, const char *logmsg)
+{
clear_loose_ref_cache(&ref_cache);
if (log_ref_write(lock->ref_name, lock->old_sha1, sha1, logmsg) < 0 ||
(strcmp(lock->ref_name, lock->orig_ref_name) &&
return 0;
}
-static int ref_update_compare(const void *r1, const void *r2)
-{
- const struct ref_update * const *u1 = r1;
- const struct ref_update * const *u2 = r2;
- return strcmp((*u1)->refname, (*u2)->refname);
-}
-
-static int ref_update_reject_duplicates(struct ref_update **updates, int n,
+static int ref_update_reject_duplicates(struct string_list *refnames,
struct strbuf *err)
{
- int i;
+ int i, n = refnames->nr;
assert(err);
for (i = 1; i < n; i++)
- if (!strcmp(updates[i - 1]->refname, updates[i]->refname)) {
+ if (!strcmp(refnames->items[i - 1].string, refnames->items[i].string)) {
strbuf_addf(err,
"Multiple updates for ref '%s' not allowed.",
- updates[i]->refname);
+ refnames->items[i].string);
return 1;
}
return 0;
struct ref_update **updates = transaction->updates;
struct string_list refs_to_delete = STRING_LIST_INIT_NODUP;
struct string_list_item *ref_to_delete;
+ struct string_list affected_refnames = STRING_LIST_INIT_NODUP;
assert(err);
return 0;
}
- /* Copy, sort, and reject duplicate refs */
- qsort(updates, n, sizeof(*updates), ref_update_compare);
- if (ref_update_reject_duplicates(updates, n, err)) {
+ /* Fail if a refname appears more than once in the transaction: */
+ for (i = 0; i < n; i++)
+ string_list_append(&affected_refnames, updates[i]->refname);
+ string_list_sort(&affected_refnames);
+ if (ref_update_reject_duplicates(&affected_refnames, err)) {
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
- /* Acquire all locks while verifying old values */
+ /*
+ * Acquire all locks, verify old values if provided, check
+ * that new values are valid, and write new values to the
+ * lockfiles, ready to be activated. Only keep one lockfile
+ * open at a time to avoid running out of file descriptors.
+ */
for (i = 0; i < n; i++) {
struct ref_update *update = updates[i];
- unsigned int flags = update->flags;
- if ((flags & REF_HAVE_NEW) && is_null_sha1(update->new_sha1))
- flags |= REF_DELETING;
+ if ((update->flags & REF_HAVE_NEW) &&
+ is_null_sha1(update->new_sha1))
+ update->flags |= REF_DELETING;
update->lock = lock_ref_sha1_basic(
update->refname,
((update->flags & REF_HAVE_OLD) ?
update->old_sha1 : NULL),
- NULL,
- flags,
- &update->type);
+ &affected_refnames, NULL,
+ update->flags,
+ &update->type,
+ err);
if (!update->lock) {
+ char *reason;
+
ret = (errno == ENOTDIR)
? TRANSACTION_NAME_CONFLICT
: TRANSACTION_GENERIC_ERROR;
- strbuf_addf(err, "Cannot lock the ref '%s'.",
- update->refname);
+ reason = strbuf_detach(err, NULL);
+ strbuf_addf(err, "Cannot lock ref '%s': %s",
+ update->refname, reason);
+ free(reason);
goto cleanup;
}
- }
-
- /* Perform updates first so live commits remain referenced */
- for (i = 0; i < n; i++) {
- struct ref_update *update = updates[i];
- int flags = update->flags;
-
- if ((flags & REF_HAVE_NEW) && !is_null_sha1(update->new_sha1)) {
+ if ((update->flags & REF_HAVE_NEW) &&
+ !(update->flags & REF_DELETING)) {
int overwriting_symref = ((update->type & REF_ISSYMREF) &&
(update->flags & REF_NODEREF));
- if (!overwriting_symref
- && !hashcmp(update->lock->old_sha1, update->new_sha1)) {
+ if (!overwriting_symref &&
+ !hashcmp(update->lock->old_sha1, update->new_sha1)) {
/*
* The reference already has the desired
* value, so we don't need to write it.
*/
- unlock_ref(update->lock);
+ } else if (write_ref_to_lockfile(update->lock,
+ update->new_sha1)) {
+ /*
+ * The lock was freed upon failure of
+ * write_ref_to_lockfile():
+ */
update->lock = NULL;
- } else if (write_ref_sha1(update->lock, update->new_sha1,
- update->msg)) {
- update->lock = NULL; /* freed by write_ref_sha1 */
strbuf_addf(err, "Cannot update the ref '%s'.",
update->refname);
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
} else {
- /* freed by write_ref_sha1(): */
+ update->flags |= REF_NEEDS_COMMIT;
+ }
+ }
+ if (!(update->flags & REF_NEEDS_COMMIT)) {
+ /*
+ * We didn't have to write anything to the lockfile.
+ * Close it to free up the file descriptor:
+ */
+ if (close_ref(update->lock)) {
+ strbuf_addf(err, "Couldn't close %s.lock",
+ update->refname);
+ ret = TRANSACTION_GENERIC_ERROR;
+ goto cleanup;
+ }
+ }
+ }
+
+ /* Perform updates first so live commits remain referenced */
+ for (i = 0; i < n; i++) {
+ struct ref_update *update = updates[i];
+
+ if (update->flags & REF_NEEDS_COMMIT) {
+ if (commit_ref_update(update->lock,
+ update->new_sha1, update->msg)) {
+ /* freed by commit_ref_update(): */
+ update->lock = NULL;
+ strbuf_addf(err, "Cannot update the ref '%s'.",
+ update->refname);
+ ret = TRANSACTION_GENERIC_ERROR;
+ goto cleanup;
+ } else {
+ /* freed by commit_ref_update(): */
update->lock = NULL;
}
}
/* Perform deletes now that updates are safely completed */
for (i = 0; i < n; i++) {
struct ref_update *update = updates[i];
- int flags = update->flags;
- if ((flags & REF_HAVE_NEW) && is_null_sha1(update->new_sha1)) {
+ if (update->flags & REF_DELETING) {
if (delete_ref_loose(update->lock, update->type, err)) {
ret = TRANSACTION_GENERIC_ERROR;
goto cleanup;
}
- if (!(flags & REF_ISPRUNING))
+ if (!(update->flags & REF_ISPRUNING))
string_list_append(&refs_to_delete,
update->lock->ref_name);
}
if (updates[i]->lock)
unlock_ref(updates[i]->lock);
string_list_clear(&refs_to_delete, 0);
+ string_list_clear(&affected_refnames, 0);
return ret;
}
char *log_file;
int status = 0;
int type;
+ struct strbuf err = STRBUF_INIT;
memset(&cb, 0, sizeof(cb));
cb.flags = flags;
* reference itself, plus we might need to update the
* reference if --updateref was specified:
*/
- lock = lock_ref_sha1_basic(refname, sha1, NULL, 0, &type);
- if (!lock)
- return error("cannot lock ref '%s'", refname);
+ lock = lock_ref_sha1_basic(refname, sha1, NULL, NULL, 0, &type, &err);
+ if (!lock) {
+ error("cannot lock ref '%s': %s", refname, err.buf);
+ strbuf_release(&err);
+ return -1;
+ }
if (!reflog_exists(refname)) {
unlock_ref(lock);
return 0;
return error("Could not read index");
fd = setup_rerere(&merge_rr, RERERE_NOAUTOUPDATE);
+ if (fd < 0)
+ return 0;
unmerge_cache(pathspec);
find_conflict(&conflict);
* answer, as it may have been deleted since the index was
* loaded!
*/
- if (!is_pack_valid(p)) {
- warning("packfile %s cannot be accessed", p->pack_name);
+ if (!is_pack_valid(p))
return 0;
- }
e->offset = offset;
e->p = p;
hashcpy(e->sha1, sha1);
# Copyright (c) 2008 Clemens Buchacher <drizzd@aon.at>
#
+if test -n "$NO_CURL"
+then
+ skip_all='skipping test, git built without http support'
+ test_done
+fi
+
+if test -n "$NO_EXPAT" && test -n "$LIB_HTTPD_DAV"
+then
+ skip_all='skipping test, git built without expat support'
+ test_done
+fi
+
test_tristate GIT_TEST_HTTPD
if test "$GIT_TEST_HTTPD" = false
then
test_cmp err.expect err
'
+test_expect_success 'info/exclude trumps core.excludesfile' '
+ echo >>global-excludes usually-ignored &&
+ echo >>.git/info/exclude "!usually-ignored" &&
+ >usually-ignored &&
+ echo "?? usually-ignored" >expect &&
+
+ git status --porcelain usually-ignored >actual &&
+ test_cmp expect actual
+'
+
test_done
helper_test store
+test_expect_success 'when xdg file does not exist, xdg file not created' '
+ test_path_is_missing "$HOME/.config/git/credentials" &&
+ test -s "$HOME/.git-credentials"
+'
+
+test_expect_success 'setup xdg file' '
+ rm -f "$HOME/.git-credentials" &&
+ mkdir -p "$HOME/.config/git" &&
+ >"$HOME/.config/git/credentials"
+'
+
+helper_test store
+
+test_expect_success 'when xdg file exists, home file not created' '
+ test -s "$HOME/.config/git/credentials" &&
+ test_path_is_missing "$HOME/.git-credentials"
+'
+
+test_expect_success 'setup custom xdg file' '
+ rm -f "$HOME/.git-credentials" &&
+ rm -f "$HOME/.config/git/credentials" &&
+ mkdir -p "$HOME/xdg/git" &&
+ >"$HOME/xdg/git/credentials"
+'
+
+XDG_CONFIG_HOME="$HOME/xdg"
+export XDG_CONFIG_HOME
+helper_test store
+unset XDG_CONFIG_HOME
+
+test_expect_success 'if custom xdg file exists, home and xdg files not created' '
+ test_when_finished "rm -f $HOME/xdg/git/credentials" &&
+ test -s "$HOME/xdg/git/credentials" &&
+ test_path_is_missing "$HOME/.git-credentials" &&
+ test_path_is_missing "$HOME/.config/git/credentials"
+'
+
+test_expect_success 'get: use home file if both home and xdg files have matches' '
+ echo "https://home-user:home-pass@example.com" >"$HOME/.git-credentials" &&
+ mkdir -p "$HOME/.config/git" &&
+ echo "https://xdg-user:xdg-pass@example.com" >"$HOME/.config/git/credentials" &&
+ check fill store <<-\EOF
+ protocol=https
+ host=example.com
+ --
+ protocol=https
+ host=example.com
+ username=home-user
+ password=home-pass
+ --
+ EOF
+'
+
+test_expect_success 'get: use xdg file if home file has no matches' '
+ >"$HOME/.git-credentials" &&
+ mkdir -p "$HOME/.config/git" &&
+ echo "https://xdg-user:xdg-pass@example.com" >"$HOME/.config/git/credentials" &&
+ check fill store <<-\EOF
+ protocol=https
+ host=example.com
+ --
+ protocol=https
+ host=example.com
+ username=xdg-user
+ password=xdg-pass
+ --
+ EOF
+'
+
+test_expect_success POSIXPERM 'get: use xdg file if home file is unreadable' '
+ echo "https://home-user:home-pass@example.com" >"$HOME/.git-credentials" &&
+ chmod -r "$HOME/.git-credentials" &&
+ mkdir -p "$HOME/.config/git" &&
+ echo "https://xdg-user:xdg-pass@example.com" >"$HOME/.config/git/credentials" &&
+ check fill store <<-\EOF
+ protocol=https
+ host=example.com
+ --
+ protocol=https
+ host=example.com
+ username=xdg-user
+ password=xdg-pass
+ --
+ EOF
+'
+
+test_expect_success 'store: if both xdg and home files exist, only store in home file' '
+ >"$HOME/.git-credentials" &&
+ mkdir -p "$HOME/.config/git" &&
+ >"$HOME/.config/git/credentials" &&
+ check approve store <<-\EOF &&
+ protocol=https
+ host=example.com
+ username=store-user
+ password=store-pass
+ EOF
+ echo "https://store-user:store-pass@example.com" >expected &&
+ test_cmp expected "$HOME/.git-credentials" &&
+ test_must_be_empty "$HOME/.config/git/credentials"
+'
+
+test_expect_success 'erase: erase matching credentials from both xdg and home files' '
+ echo "https://home-user:home-pass@example.com" >"$HOME/.git-credentials" &&
+ mkdir -p "$HOME/.config/git" &&
+ echo "https://xdg-user:xdg-pass@example.com" >"$HOME/.config/git/credentials" &&
+ check reject store <<-\EOF &&
+ protocol=https
+ host=example.com
+ EOF
+ test_must_be_empty "$HOME/.git-credentials" &&
+ test_must_be_empty "$HOME/.config/git/credentials"
+'
+
test_done
)
'
-test_expect_success 'no file/rev ambiguity check inside a bare repo' '
+test_expect_success 'no file/rev ambiguity check inside a bare repo (explicit GIT_DIR)' '
+ test_when_finished "rm -fr foo.git" &&
git clone -s --bare .git foo.git &&
(
cd foo.git &&
+ # older Git needed help by exporting GIT_DIR=.
+ # to realize that it is inside a bare repository.
+ # We keep this test around for regression testing.
GIT_DIR=. git show -s HEAD
)
'
-# This still does not work as it should...
-: test_expect_success 'no file/rev ambiguity check inside a bare repo' '
+test_expect_success 'no file/rev ambiguity check inside a bare repo' '
+ test_when_finished "rm -fr foo.git" &&
git clone -s --bare .git foo.git &&
(
cd foo.git &&
'
test_expect_success SYMLINKS 'detection should not be fooled by a symlink' '
- rm -fr foo.git &&
git clone -s .git another &&
ln -s another yetanother &&
(
test_expect_success 'stdin update ref fails with wrong old value' '
echo "update $c $m $m~1" >stdin &&
test_must_fail git update-ref --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$c'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$c'"'"'" err &&
test_must_fail git rev-parse --verify -q $c
'
test_expect_success 'stdin delete ref fails with wrong old value' '
echo "delete $a $m~1" >stdin &&
test_must_fail git update-ref --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$a'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$a'"'"'" err &&
git rev-parse $m >expect &&
git rev-parse $a >actual &&
test_cmp expect actual
update $c ''
EOF
test_must_fail git update-ref --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$c'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$c'"'"'" err &&
git rev-parse $m >expect &&
git rev-parse $a >actual &&
test_cmp expect actual &&
test_expect_success 'stdin -z update ref fails with wrong old value' '
printf $F "update $c" "$m" "$m~1" >stdin &&
test_must_fail git update-ref -z --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$c'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$c'"'"'" err &&
test_must_fail git rev-parse --verify -q $c
'
git rev-parse "$c" >expect &&
printf $F "create $c" "$m~1" >stdin &&
test_must_fail git update-ref -z --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$c'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$c'"'"'" err &&
git rev-parse "$c" >actual &&
test_cmp expect actual
'
test_expect_success 'stdin -z delete ref fails with wrong old value' '
printf $F "delete $a" "$m~1" >stdin &&
test_must_fail git update-ref -z --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$a'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$a'"'"'" err &&
git rev-parse $m >expect &&
git rev-parse $a >actual &&
test_cmp expect actual
git update-ref $c $m &&
printf $F "update $a" "$m" "$m" "update $b" "$m" "$m" "update $c" "$m" "$Z" >stdin &&
test_must_fail git update-ref -z --stdin <stdin 2>err &&
- grep "fatal: Cannot lock the ref '"'"'$c'"'"'" err &&
+ grep "fatal: Cannot lock ref '"'"'$c'"'"'" err &&
git rev-parse $m >expect &&
git rev-parse $a >actual &&
test_cmp expect actual &&
test_must_fail git rev-parse --verify -q $c
'
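+# Run "$@" with the open file descriptor limit lowered to 32, so that a
+# ref transaction that keeps too many lockfiles open fails visibly.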
+run_with_limited_open_files () {
+ (ulimit -n 32 && "$@")
+}
+
+test_lazy_prereq ULIMIT_FILE_DESCRIPTORS 'run_with_limited_open_files true'
+
+test_expect_success ULIMIT_FILE_DESCRIPTORS 'large transaction creating branches does not burst open file limit' '
+(
+ for i in $(test_seq 33)
+ do
+ echo "create refs/heads/$i HEAD"
+ done >large_input &&
+ run_with_limited_open_files git update-ref --stdin <large_input &&
+ git rev-parse --verify -q refs/heads/33
+)
+'
+
+test_expect_success ULIMIT_FILE_DESCRIPTORS 'large transaction deleting branches does not burst open file limit' '
+(
+ for i in $(test_seq 33)
+ do
+ echo "delete refs/heads/$i HEAD"
+ done >large_input &&
+ run_with_limited_open_files git update-ref --stdin <large_input &&
+ test_must_fail git rev-parse --verify -q refs/heads/33
+)
+'
+
test_done
--- /dev/null
+#!/bin/sh
+
+test_description='Test git update-ref with D/F conflicts'
+. ./test-lib.sh
+
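+# test_update_rejected <prefix> <before> <pack> <create> <error>:
+# create the refs listed in <before> under <prefix> (packing them when
+# <pack> is true), then check that creating the refs listed in <create>
+# fails with <error> and leaves the pre-existing refs unchanged.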
+test_update_rejected () {
+ prefix="$1" &&
+ before="$2" &&
+ pack="$3" &&
+ create="$4" &&
+ error="$5" &&
+ printf "create $prefix/%s $C\n" $before |
+ git update-ref --stdin &&
+ git for-each-ref $prefix >unchanged &&
+ if $pack
+ then
+ git pack-refs --all
+ fi &&
+ printf "create $prefix/%s $C\n" $create >input &&
+ test_must_fail git update-ref --stdin <input 2>output.err &&
+ grep -F "$error" output.err &&
+ git for-each-ref $prefix >actual &&
+ test_cmp unchanged actual
+}
+
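+# Q is a literal single quote, used to embed quoted ref names in the
+# expected error messages.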
+Q="'"
+
+test_expect_success 'setup' '
+
+ git commit --allow-empty -m Initial &&
+ C=$(git rev-parse HEAD)
+
+'
+
+test_expect_success 'existing loose ref is a simple prefix of new' '
+
+ prefix=refs/1l &&
+ test_update_rejected $prefix "a c e" false "b c/x d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x$Q"
+
+'
+
+test_expect_success 'existing packed ref is a simple prefix of new' '
+
+ prefix=refs/1p &&
+ test_update_rejected $prefix "a c e" true "b c/x d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x$Q"
+
+'
+
+test_expect_success 'existing loose ref is a deeper prefix of new' '
+
+ prefix=refs/2l &&
+ test_update_rejected $prefix "a c e" false "b c/x/y d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x/y$Q"
+
+'
+
+test_expect_success 'existing packed ref is a deeper prefix of new' '
+
+ prefix=refs/2p &&
+ test_update_rejected $prefix "a c e" true "b c/x/y d" \
+ "$Q$prefix/c$Q exists; cannot create $Q$prefix/c/x/y$Q"
+
+'
+
+test_expect_success 'new ref is a simple prefix of existing loose' '
+
+ prefix=refs/3l &&
+ test_update_rejected $prefix "a c/x e" false "b c d" \
+ "$Q$prefix/c/x$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'new ref is a simple prefix of existing packed' '
+
+ prefix=refs/3p &&
+ test_update_rejected $prefix "a c/x e" true "b c d" \
+ "$Q$prefix/c/x$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'new ref is a deeper prefix of existing loose' '
+
+ prefix=refs/4l &&
+ test_update_rejected $prefix "a c/x/y e" false "b c d" \
+ "$Q$prefix/c/x/y$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'new ref is a deeper prefix of existing packed' '
+
+ prefix=refs/4p &&
+ test_update_rejected $prefix "a c/x/y e" true "b c d" \
+ "$Q$prefix/c/x/y$Q exists; cannot create $Q$prefix/c$Q"
+
+'
+
+test_expect_success 'one new ref is a simple prefix of another' '
+
+ prefix=refs/5 &&
+ test_update_rejected $prefix "a e" false "b c c/x d" \
+ "cannot process $Q$prefix/c$Q and $Q$prefix/c/x$Q at the same time"
+
+'
+
+test_done
grep "^# Rebase ..* onto ..* ([0-9]" actual
'
+test_expect_success 'rebase -i commits that overwrite untracked files (pick)' '
+ git checkout --force branch2 &&
+ git clean -f &&
+ set_fake_editor &&
+ FAKE_LINES="edit 1 2" git rebase -i A &&
+ test_cmp_rev HEAD F &&
+ test_path_is_missing file6 &&
+ >file6 &&
+ test_must_fail git rebase --continue &&
+ test_cmp_rev HEAD F &&
+ rm file6 &&
+ git rebase --continue &&
+ test_cmp_rev HEAD I
+'
+
+test_expect_success 'rebase -i commits that overwrite untracked files (squash)' '
+ git checkout --force branch2 &&
+ git clean -f &&
+ git tag original-branch2 &&
+ set_fake_editor &&
+ FAKE_LINES="edit 1 squash 2" git rebase -i A &&
+ test_cmp_rev HEAD F &&
+ test_path_is_missing file6 &&
+ >file6 &&
+ test_must_fail git rebase --continue &&
+ test_cmp_rev HEAD F &&
+ rm file6 &&
+ git rebase --continue &&
+ test $(git cat-file commit HEAD | sed -ne \$p) = I &&
+ git reset --hard original-branch2
+'
+
+test_expect_success 'rebase -i commits that overwrite untracked files (no ff)' '
+ git checkout --force branch2 &&
+ git clean -f &&
+ set_fake_editor &&
+ FAKE_LINES="edit 1 2" git rebase -i --no-ff A &&
+ test $(git cat-file commit HEAD | sed -ne \$p) = F &&
+ test_path_is_missing file6 &&
+ >file6 &&
+ test_must_fail git rebase --continue &&
+ test $(git cat-file commit HEAD | sed -ne \$p) = F &&
+ rm file6 &&
+ git rebase --continue &&
+ test $(git cat-file commit HEAD | sed -ne \$p) = I
+'
+
test_done
'
+test_expect_success 'add -e notices editor failure' '
+ git reset --hard &&
+ echo change >>file &&
+ test_must_fail env GIT_EDITOR=false git add -e &&
+ test_expect_code 1 git diff --exit-code
+'
+
test_done
Rearranged lines in dir/sub
-commit 59d314ad6f356dd08601a4cd5e530381da3e3c64 (HEAD, refs/heads/master)
+commit 59d314ad6f356dd08601a4cd5e530381da3e3c64 (HEAD -> refs/heads/master)
Merge: 9a6d494 c7a2ab9
Author: A U Thor <author@example.com>
Date: Mon Jun 26 00:04:00 2006 +0000
git commit -m "add bfile"
) &&
test_tick && test_tick &&
+ echo "second" >afile &&
+ git add afile &&
+ git commit -m "second commit" &&
echo "original $dollar" >afile &&
git add afile &&
git commit -m "do not clobber $dollar signs"
)
'
+test_expect_success '--log=1 limits shortlog length' '
+(
+ cd cloned &&
+ git reset --hard HEAD^ &&
+ test "$(cat afile)" = original &&
+ test "$(cat bfile)" = added &&
+ git pull --log=1 &&
+ git log -3 &&
+ git cat-file commit HEAD >result &&
+ grep Dollar result &&
+ ! grep "second commit" result
+)
+'
+
test_done
test_description='fetch/clone from a shallow clone over http'
. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
test_description='test smart pushing over http via http-backend'
. ./test-lib.sh
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
ROOT_PATH="$PWD"
. "$TEST_DIRECTORY"/lib-gpg.sh
. "$TEST_DIRECTORY"/lib-httpd.sh
test_description='push from/to a shallow clone over http'
. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- say 'skipping test, git built without http support'
- test_done
-fi
-
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
test_description='test dumb fetching over http via static file'
. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
test_description='test smart fetching over http via http-backend'
. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
test_description='test git-http-backend'
. ./test-lib.sh
-
-if test -n "$NO_CURL"; then
- skip_all='skipping test, git built without http support'
- test_done
-fi
-
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
'
}
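+# Install the fake ssh wrapper under the given name and point GIT_SSH
+# at it, to exercise how the transport code treats that command name.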
+copy_ssh_wrapper_as () {
+ cp "$TRASH_DIRECTORY/ssh-wrapper" "$1" &&
+ GIT_SSH="$1" &&
+ export GIT_SSH
+}
+
expect_ssh () {
test_when_finished '
(cd "$TRASH_DIRECTORY" && rm -f ssh-expect && >ssh-output)
test_expect_success 'bracketed hostnames are still ssh' '
git clone "[myhost:123]:src" ssh-bracket-clone &&
- expect_ssh myhost '-p 123' src
+ expect_ssh "-p 123" myhost src
+'
+
+test_expect_success 'uplink is not treated as putty' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/uplink" &&
+ git clone "[myhost:123]:src" ssh-bracket-clone-uplink &&
+ expect_ssh "-p 123" myhost src
+'
+
+test_expect_success 'plink is treated specially (as putty)' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/plink" &&
+ git clone "[myhost:123]:src" ssh-bracket-clone-plink-0 &&
+ expect_ssh "-P 123" myhost src
'
+test_expect_success 'plink.exe is treated specially (as putty)' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/plink.exe" &&
+ git clone "[myhost:123]:src" ssh-bracket-clone-plink-1 &&
+ expect_ssh "-P 123" myhost src
+'
+
+test_expect_success 'tortoiseplink is like putty, with extra arguments' '
+ copy_ssh_wrapper_as "$TRASH_DIRECTORY/tortoiseplink" &&
+ git clone "[myhost:123]:src" ssh-bracket-clone-plink-2 &&
+ expect_ssh "-batch -P 123" myhost src
+'
+
+# Reset the GIT_SSH environment variable for clone tests.
+setup_ssh_wrapper
+
counter=0
# $1 url
# $2 none|host
(ulimit -s 128 && "$@")
}
-test_lazy_prereq ULIMIT 'run_with_limited_stack true'
+test_lazy_prereq ULIMIT_STACK_SIZE 'run_with_limited_stack true'
# we require ulimit, this excludes Windows
-test_expect_success ULIMIT '--contains works in a deep repo' '
+test_expect_success ULIMIT_STACK_SIZE '--contains works in a deep repo' '
>expect &&
i=1 &&
while test $i -lt 8000
test "$(git rev-parse HEAD)" = "$(git rev-parse c1)"
'
+test_expect_success 'pull.ff=true overrides merge.ff=false' '
+ git reset --hard c0 &&
+ test_config merge.ff false &&
+ test_config pull.ff true &&
+ git pull . c1 &&
+ test "$(git rev-parse HEAD)" = "$(git rev-parse c1)"
+'
+
test_expect_success 'fast-forward pull creates merge with "false" in pull.ff' '
git reset --hard c0 &&
test_config pull.ff false &&
test $(grep -c " " actual) = 9
'
-test_expect_success 'blaming files with CRLF newlines' '
+test_expect_success 'setup file with CRLF newlines' '
git config core.autocrlf false &&
- printf "testcase\r\n" >crlffile &&
+ printf "testcase\n" >crlffile &&
git add crlffile &&
git commit -m testcase &&
- git -c core.autocrlf=input blame crlffile >actual &&
+ printf "testcase\r\n" >crlffile
+'
+
+test_expect_success 'blame file with CRLF core.autocrlf true' '
+ git config core.autocrlf true &&
+ git blame crlffile >actual &&
+ grep "A U Thor" actual
+'
+
+test_expect_success 'blame file with CRLF attributes text' '
+ git config core.autocrlf false &&
+ echo "crlffile text" >.gitattributes &&
+ git blame crlffile >actual &&
grep "A U Thor" actual
'